Experts Warn You Should Never Share These 5 Things With ChatGPT — And What You Can Do If You Already Did


It’s becoming increasingly common for people to use ChatGPT and other AI chatbots like Gemini, Copilot and Claude in their everyday lives. A recent survey from Elon University’s Imagining the Digital Future Center found that half of Americans now use these technologies.

“By any measure, the adoption and use of LLMs [large language models] is astounding,” Lee Rainie, director of Elon’s Imagining the Digital Future Center, said in a university news release. “I am especially struck by the ways these tools are being woven into people’s social lives.”
And while these tools can be useful when it comes to, say, helping you write an email or brainstorm questions for a doctor’s appointment, it’s wise to be cautious about how much information you share with them.

A recent study from the Stanford Institute for Human-Centered AI helps explain why. Researchers analyzed the privacy policies of six of the top U.S. AI chat system developers (OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Amazon’s Nova, Meta’s Meta AI and Microsoft’s Copilot) and found that all of them appear to use customer conversations to “train and improve their models by default,” and that “some retain this data indefinitely.”

People underestimate how much of what they share with an AI chatbot can be “stored, analyzed, and potentially reused,” cybersecurity expert George Kamide, co-host of the technology podcast “Bare Knuckles and Brass Tacks,” told HuffPost.

“Many LLMs are trained or fine-tuned using user inputs, which means conversations can contribute — directly or indirectly — to the model’s future behavior,” he continued.

“If those interactions contain personal identifiers, sensitive data, or confidential information, they could become part of a dataset that’s beyond the user’s control. Ultimately, data is the greatest value that AI companies can extract from us.”

Below, experts explain the types of information you should think twice about sharing with an AI chatbot:
Personally identifiable information, known as PII, is any type of data that can be used to identify an individual, including your full name, home address, phone number, and government ID numbers like your Social Security, passport or driver’s license number.

Sharing these details with a chatbot “introduces the risk that this data could be logged or processed in ways that expose you to identity theft, phishing or data brokerage activities,” explained information security expert George Al-Koura, who co-hosts “Bare Knuckles and Brass Tacks.” So it’s best avoided.

Know that any files you upload along with your prompts could also be used for training the model. So if you’re using ChatGPT to help fine-tune your resume, for example, you should remove any of this identifying information from the document beforehand to be safe.
People often feel more comfortable divulging intimate information in a ChatGPT conversation than they would with, say, a Google search because the AI chatbot allows for a back-and-forth dialogue that feels more human in nature.

“This can give a false sense of security leading to a greater willingness to provide personal information via a chatbot than to a static search engine,” Ashley Casovan, the managing director of the International Association of Privacy Professionals (IAPP) AI Governance Center, told HuffPost.

Sensitive details you share about your thoughts, behaviors, mental state or relationships in these conversations are not legally protected and could potentially be used as evidence in court.

“The number of people who are using LLM-based chatbots as therapists, life coaches, and even as some form of an intimate ‘partner’ is already alarming,” Kamide said.
A 2024 poll from KFF, a health policy organization, found that 1 in 6 adults turn to AI chatbots at least once a month for health information and advice.

Doing so can be helpful in navigating health issues, but there are privacy risks involved (not to mention concerns about accuracy). Unlike doctors, most of the mainstream chatbots are not bound by the Health Insurance Portability and Accountability Act, or HIPAA, Dr. Ravi Parikh, director of the Human-Algorithm Collaboration Lab at Emory University, told The New York Times.

Avoid sharing any personal medical details ― including your health care records ― with an AI chatbot. If you’re going to enter health-related data in the conversation, be sure to remove identifying information from your prompts.

If you’re thinking about using an AI chatbot to get a leg up at work, tread lightly. Don’t input internal business data or reports, client data, source code or anything protected by a non-disclosure agreement, Al-Koura advised.

“Many AI chat platforms operate on shared infrastructure, and despite strong security postures, your input may still be logged for ‘model improvement,’” he said. “A single prompt containing sensitive data could constitute a regulatory or contractual breach.”

Your paystubs, banking and investment account information, and credit card details should not be shared with an AI chatbot, the University of Kentucky Information Technology Services advises.

“While AI can offer general financial advice, it’s safer to consult a financial advisor for personal matters to avoid the risk of hacking or data misuse,” a post on the university’s website reads.

Same goes for your tax returns and other income-related documents.

“If these documents are exposed, they can be used for blackmail, fraud or tailored social engineering attacks against you or your family,” financial writer Adam Hayes warned in an Investopedia article.
It may not be possible to put the toothpaste back in the tube, so to speak. But you can still try to mitigate some of the potential harm.

Once your information has become part of a chatbot’s training data, “you can’t really get it back,” Kamide said. Still, he suggested deleting your chat history “to stop exfiltration of data, should anyone compromise your account.”

Then take some time to think about what information you are (and are not) comfortable sharing with an AI chatbot going forward. Start treating AI conversations as “semi-public spaces rather than private diaries,” Al-Koura recommended.
“Be deliberate and minimalist in what you share. Before sending a message, ask yourself, ‘Would I be comfortable seeing this on a shared family group chat or company Slack channel?’” Al-Koura said.
You can also adjust the privacy settings of any AI chatbots you interact with to reduce (but not eliminate) some of the privacy risks — things like disabling your chat history or opting out of having your conversations used for model training.
“Different tools will allow for different configurations of what data it will ‘remember,’” Casovan said. “Based on your individual comfort and use, exploring these different options will allow you to calibrate based on your comfort level or organizational direction.”
“However, having a good understanding of how these systems work, how the data is stored, who has access, how it is transferred and under what circumstances, will allow you to make more informed decisions on how you can leverage these tools for your benefit, while still being comfortable with the information that you are sharing,” she continued.
When writing your prompts, Al-Koura recommended using pseudonyms and more general language to avoid disclosing too much personal or confidential information. For example, you might use “a client in health care” rather than “a patient at St. Mary’s Hospital” to “preserve context while protecting identity,” he suggested.
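If you happen to send prompts to a chatbot through code rather than a chat window, the same advice can be automated. The short Python sketch below is purely illustrative (the pseudonym list and regex patterns are assumptions, not a complete PII scrubber): it swaps known names for generic labels and masks obvious identifiers before a prompt is sent anywhere.

```python
import re

# Illustrative only: hypothetical pseudonym mappings and simple patterns,
# not a complete or reliable PII scrubber.
PSEUDONYMS = {
    "St. Mary's Hospital": "a hospital client",
    "Jane Doe": "the patient",
}

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # U.S. Social Security numbers
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # common phone formats
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
]

def redact(prompt: str) -> str:
    """Swap named entities for pseudonyms, then mask obvious identifiers."""
    for name, alias in PSEUDONYMS.items():
        prompt = prompt.replace(name, alias)
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarize Jane Doe's visit to St. Mary's Hospital, SSN 123-45-6789."))
# -> "Summarize the patient's visit to a hospital client, SSN [SSN]."
```

The point of the sketch is simply that the generalizing happens on your own device, before anything reaches the chatbot.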
But the onus shouldn’t just be on users, of course. AI developers and policymakers should improve protections for personal data via “comprehensive federal privacy regulation, affirmative opt-in for model training, and filtering personal information from chat inputs by default,” researchers from the Stanford Institute for Human-Centered AI said.
Kamide called this a “defining moment for digital ethics.”
“The more these systems can mimic human communication styles, the easier it is to forget they are still just data processors, not confidants or friends,” he said. “If we can cultivate a culture where people stay curious, cautious and privacy-aware — while technologists build responsibly and transparently — we can unlock AI’s full potential without sacrificing trust. In short, we need guardrails in order to innovate responsibly.”