AI Chatbot Companies Should Protect Your Conversations From Bulk Surveillance


EFF intern Alexandra Halbeck contributed to this blog post.
When people talk to a chatbot, they often reveal highly personal information they wouldn’t share with anyone else. Chat logs are digital repositories of our most sensitive and revealing information. They are also tempting targets for law enforcement, to which the U.S. Constitution gives only one answer: get a warrant.
AI companies have a responsibility to their users to make sure the warrant requirement is strictly followed, to resist unlawful bulk surveillance requests, and to be transparent with their users about the number of government requests they receive.
Tens of millions of people use chatbots to brainstorm, test ideas, and explore questions they might never post publicly or even admit to another person. Whether advisable or not, people also turn to consumer AI companies for medical information, financial advice, and even dating tips. These conversations reveal people’s most sensitive information.

Consider the sensitivity of the following prompts: “how to get abortion pills,” “how to protect myself at a protest,” or “how to escape an abusive relationship.” These exchanges can reveal everything from health status to political beliefs to private grief. A single chat thread can expose the kind of intimate detail once locked away in a handwritten diary.
Without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help.
Whether you draft an email, edit an online document, or ask a question to a chatbot, you have a reasonable expectation of privacy in that information. Chatbots may be a new technology, but the constitutional principle is old and clear. Before the government can rifle through your private thoughts stored on digital platforms, it must do what it has always been required to do: get a warrant.
For over a century, the Fourth Amendment has protected the content of private communications—such as letters, emails, and search engine prompts—from unreasonable government searches. AI prompts require the same constitutional protection.
This protection is not aspirational—it already exists. The Fourth Amendment draws a bright line around private communications: the government must show probable cause and obtain a particularized warrant before compelling a company to turn over your data. Companies like OpenAI acknowledge this warrant requirement explicitly, while others like Anthropic could stand to be more precise.
AI companies that create chatbots should commit to having your back and resisting unlawful bulk surveillance orders. A valid search warrant requires law enforcement to show a judge probable cause and to particularly describe the place to be searched and the things to be seized. Bulk surveillance orders, by their nature, often fail that test.
What do these overbroad orders look like? Over the past decade or so, police have often sought “reverse” search warrants for user information held by technology companies. Rather than searching for one particular individual, police have demanded that companies rummage through their giant databases of personal data to help develop investigative leads. This has included “tower dumps” and “geofence warrants,” in which police order a company to search all users’ location data to identify anyone who has been near a particular place at a particular time. It has also included “keyword” warrants, which seek to identify every person who typed a particular phrase into a search engine. This could mean a chilling keyword warrant for a well-known politician’s name or a busy street address, or a geofence warrant drawn around a protest or a church.
Courts are beginning to rule that these broad demands are unconstitutional. And after years of complying, Google has finally made it technically difficult—if not impossible—to provide mass location data in response to a geofence warrant.
This is an old story: if a company stores a lot of data about its users, law enforcement (and private litigants) will eventually seek it out. Law enforcement is already demanding user data from AI chatbot companies, and it will only increase. These companies must be prepared for this onslaught, and they must commit to fighting to protect their users.
In addition to minimizing the amount of data accessible to law enforcement, they can start with three promises to their users: require a valid warrant before turning over the content of users’ conversations, resist unlawful and overbroad bulk surveillance orders, and be transparent about the number of government requests they receive. These aren’t radical ideas. They are basic transparency and accountability standards that preserve user trust and ensure constitutional rights keep pace with technology.