Non-Profit Helpline Shifts To Chatbots, Then Shuts Down Rogue AI – Forbes

These workers never take a break.
The largest eating disorder non-profit in the country is replacing staff and volunteers with an AI-based online solution, and the chatbot has already been shut down. A half-dozen human staff and a volunteer army of over 200 have been let go, replaced by an AI chatbot named Tessa. However, Tessa is now out of work as well: the National Eating Disorders Association says the AI chatbot that was going to replace the humans “may have given” harmful information, according to multiple reports. The bot gained notoriety (and not the good kind) on social media, as posts showed Tessa providing weight-loss advice. That advice was outside of its original programming, and beyond the intentions of the organization, according to Liz Thompson, NEDA’s CEO. As a result, NEDA is shutting down the helpline entirely, according to the Journal.
In a statement issued on Twitter, employees of the National Eating Disorders Association shared that they had been told they would be fired and replaced with a chatbot as of June 1st. “Please note that Tessa does not replace therapy nor the NEDA Helpline, but is always available to provide additional support when needed,” Tessa’s website says. The bot came online last week and saw a 600% surge in volume. However, some of its text exchanges veered into diet advice. “For someone with an eating disorder, that advice is very dangerous,” says Alexis Conason, a psychologist and eating disorder specialist.
The NEDA helpline, which launched in 1999, served nearly 70,000 people and families last year. Staffers saw the move to AI as union-busting; they had organized only a week earlier in an effort to combat the change.
Dr. Ellen Fitzsimmons-Craft, a professor of psychiatry at Washington University’s medical school who helped design Tessa, says, “The chatbot was created based on decades of research conducted by myself and my colleagues.” In its original form, the chatbot was unable to provide unscripted answers. The wellness chatbot “isn’t as sophisticated as ChatGPT,” Dr. Fitzsimmons-Craft says. The intention was for the chatbot to serve up pre-written answers to questions, typically related to body image, so that people could reframe their approach to eating disorders. Dieting tips were not part of the program. Conason tells Fortune, “Imagine vulnerable people with eating disorders reaching out to a robot for support because that’s all they have available and receiving responses that further promote the eating disorder.”
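The design Dr. Fitzsimmons-Craft describes can be sketched in a few lines. This is a hypothetical illustration, not Tessa’s actual code: the topic keywords, blocked terms, and canned messages below are invented to show what “scripted answers only, with a hard refusal on dieting topics” might look like.

```python
# Hypothetical sketch of a scripted wellness bot: it can only return
# pre-approved responses, never generated text. All keywords and
# messages here are invented for illustration, not Tessa's.

APPROVED_RESPONSES = {
    "body_image": "Bodies naturally come in all shapes and sizes.",
    "support": "You deserve support. Please consider reaching out "
               "to a licensed professional.",
}

# Guardrail list: topics the bot must never engage with.
BLOCKED_KEYWORDS = {"diet", "calories", "weight loss", "lose weight"}

FALLBACK = ("I'm only able to share general body-image support. "
            "For anything else, please contact a professional.")

def respond(user_message: str) -> str:
    text = user_message.lower()
    # Refuse anything touching dieting before any other matching.
    if any(keyword in text for keyword in BLOCKED_KEYWORDS):
        return FALLBACK
    if "body" in text or "image" in text:
        return APPROVED_RESPONSES["body_image"]
    return APPROVED_RESPONSES["support"]
```

Because every possible output is a fixed string, a bot built this way cannot improvise; the reported failure suggests that guarantee was lost somewhere between this original design and the version that went live.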
With one person dying from an eating disorder in this country every 52 minutes, the risks of misinformation are extremely troubling.
The pandemic provided a perfect storm for eating disorders, one of the unfortunate consequences of rampant loneliness. NPR reports that the NEDA helpline was run by just six paid staffers, who trained and oversaw up to 200 volunteers at any given time. The staff felt overwhelmed, under-supported, and burned out. Turnover was high, so the helpline staff voted to unionize.
Lauren Smolar, VP at NEDA, says the increase in crisis calls also meant more legal liability. “Our volunteers are volunteers. They’re not professionals. They don’t have crisis training. And we really can’t accept that kind of responsibility. We really need them to go to those services who are appropriate.” Katy Meta, a 20-year-old college student who has volunteered for the helpline, tells NPR, “A lot of these individuals come on multiple times because they have no other outlet to talk with anybody…That’s all they have, is the chat line.”
With eating disorders spiking, always-on AI chatbots are poised to change the way we receive healthcare information. That’s true for many industries, including travel and financial services – but can the conversation be controlled? Can we put checks and balances in place so that what we receive is good advice, not dangerous misinformation, chatbot improvisation, or made-up guidance? Business leaders must focus on what matters most by offering advice and counsel that remembers the first words of the Hippocratic oath: “Do no harm…”
Chatbots and artificial intelligence might not be ready for prime time. Especially when lives are at stake.
To be sure, the conversation around artificial conversation is not over. In fact, it’s just getting started. But what does your organization have to say about it? And how will AI and chatbots impact the way that you serve your customers – and treat your employees? Perhaps the deeper question is: what guardrails need to be put in place, so that chatbots will stick to the script?