Why Sam Altman wants privacy privileges for ChatGPT
ChatGPT’s weekly users make up 10% of the world’s adult population — and Sam Altman wants their chatbot conversations to remain private. The OpenAI CEO has repeatedly said he wants the same privacy protections for AI chatbots that are typically granted to doctors, lawyers, and therapists, to avoid having to hand over compromising user data to courts.
In a recent blog post previewing ChatGPT for teens, Altman reiterated his stance on AI privacy protections: “People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have.”
“If you talk to a doctor about your medical history or a lawyer about a legal situation, we have decided that it’s in society’s best interest for that information to be privileged and provided higher levels of protection,” Altman wrote. “We believe that the same level of protection needs to apply to conversations with AI which people increasingly turn to for sensitive questions and private concerns.”
But there are no such legal protections for chatbot users to date; in a July interview with podcaster Theo Von, Altman warned that courts could compel OpenAI to share its users' private conversations. Already, OpenAI has been ordered by a federal court to hold onto private chat logs, and that puts the company at risk of both alienating users and getting tangled up in legal battles.
So will chatbots ever get those protections? That depends on who you ask and what they want.
“Companies always prefer to have rules that shield them from liability, and advocates always want to have access to information that they can use to enforce against problems,” said Peter Swire, a law professor at the Georgia Tech Scheller College of Business with a focus on privacy and cybersecurity. “That happens in every sector all the time.”
The road to AI privacy protections won’t be easy. Just in the last few months, AI companies have contended with chatbot lawsuits, state crackdowns on AI therapy, a Stanford study calling out the risks of using therapy chatbots, and Congressional hearings on the broader harms of AI. Meanwhile, OpenAI is actively pushing the boundaries around its access to users' sensitive data.
Advocates in favor of AI regulations have a laundry list of reasons to restrict just how far AI companies can go, potentially pushing Altman’s ideal scenario even further out of reach.
Strict privacy protections exist for doctors, lawyers, and therapists because “it's so important that individuals speak honestly to those people,” Swire said. As for chatbots gaining those same privileges, Swire explained he could imagine a system where privileges are extended in “limited circumstances,” where there’s a “clear announcement” that the chatbot is going to act as an attorney or doctor. But this is a “very different proposal” than treating every interaction with a chatbot “like the most sensitive psychiatric conversation,” he said.
“If the chatbot is going to act in the role of a doctor or a lawyer, then it’s worth exploring having the privileged protection in those specific circumstances,” said Swire.
According to Mayu Tobin-Miyaji, a law fellow who specializes in AI and human rights at the nonprofit Electronic Privacy Information Center, privacy protections could possibly be extended to vetted LLMs under the supervision of doctors and other professionals “who are licensed and qualified.” Tobin-Miyaji doesn’t think today’s chatbots themselves will get such protections, but the licensed professionals who “oversee that process” could.
Yet, mental health advocates seem to be pushing lawmakers in another direction altogether.
Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association, urged Congress during a recent hearing to prohibit AI from “misrepresenting itself as psychologists or therapists” and to “mandate” disclosures that “users are interacting” with a bot.
“[Safety concerns] are not going to go away if there's confidentiality protections in place,” said Tobin-Miyaji.
“There should be more transparency and accountability mechanisms for the public to understand when they're putting in data, how that data is being used, where it's being shared to, [and] how that may impact the people's privacy and autonomy rights,” she continued.
If chatbot users think their private conversations could become court evidence, will they still feel comfortable sharing so much? Some users already seem skittish.
A May court order required OpenAI to indefinitely save ChatGPT conversations — even deleted ones. OpenAI said the data can only be accessed under “strict legal protocols” and that it's challenging the order, but users immediately started voicing their concerns on Reddit, showing how quickly public perception of ChatGPT can shift.
“Even if it never ends up seeing the light of day, the fact that an individual lawyer would potentially have to review people’s temporary chats seems like a privacy harm in and of itself,” a Reddit user wrote.
Another cautioned that users should “assume everything you post online is going to be public in some shape or form.” A third said, “Never. Trust. Corporations. With. Sensitive. Data.”
A recent study on ChatGPT users from OpenAI and Harvard researchers found that about 77% of conversations revolve around practical guidance, writing, and seeking information, while just 1.9% of messages analyzed are related to relationships and “social-emotional issues.” Although this is a small percentage of how users currently spend their time with the chatbot, OpenAI says sensitive chats are on the rise; the lack of protections and growing awareness that chat logs could be pulled into legal battles may change how users ultimately interact with ChatGPT, dealing a blow to the privacy-first narrative that OpenAI — and Altman — are banking on.