4 Uncomfortable Truths About Using ChatGPT – bgr.com

In late November 2022, OpenAI took the world by surprise by releasing ChatGPT, an advanced chatbot that uses a large language model to comprehend human input and generate human-like responses. Despite being one of the earliest consumer applications of an LLM, ChatGPT became popular largely because of its accessibility: you can download the app or reach OpenAI’s conversational AI through a web browser. If you want to save your conversation history, you’ll need to log in, either with an OpenAI account or through an existing Apple, Google, or Microsoft account.
While ChatGPT is easy to access and fits naturally into work, school, and even casual use, there are still some things you should keep in mind when engaging with the chatbot. ChatGPT can give incorrect answers and collect your personal information. It is reliable in some applications, but it still has biases that can’t be ignored. And conversing with a chatbot for long periods can have psychological effects that are uncomfortable to confront.
ChatGPT may deliver responses in a very matter-of-fact tone, but the fact is that it isn’t always right. It can only produce answers based on what it was trained on: some of that training data introduces biases, and its responses can sound completely logical while being built on incorrect or outdated information. If you rely on the free public model in particular, its knowledge will always lag slightly behind the present.
At the time of writing, ChatGPT runs GPT-5.1 across all tiers, with GPT-5.2 expected to be released soon. That alone shows the constant pressure to ship updates and keep pace with the competition. Remember that AI is still a rapidly growing field, and rivals like Google’s Gemini, Anthropic’s Claude, and even Perplexity only add fuel to that fire.
ChatGPT also doesn’t always provide proper sourcing or citations for its answers. OpenAI notes in its own guide that ChatGPT may give incorrect definitions and facts, and may even fabricate quotes and studies. You should always verify its claims before trusting them, but the chatbot often makes it hard to trace where its information comes from, so ask it for sources whenever you can.
When conversing with OpenAI’s chatbot, it can be easy to forget that the thing producing responses is an AI, not a human, precisely because the underlying LLM is trained to parse input and reply in a human-like way. There have even been cases of people shaping the chatbot into an idealized AI companion; one case reported earlier this year involved a married woman who used the platform to carry on a virtual affair.
This is just a byproduct of chatbot AIs seeming human-like on the surface, but consider how tempting it is to use one as a place to vent your problems or have sensitive discussions without feeling judged. That ease of conversation is exactly why people can almost feel tricked when they remember that the “person” on the other side of the screen isn’t a person at all, but an AI predicting answers from the data it was given.
ChatGPT also has a limit on how much it can remember when you engage with it. That’s not something everyone considers, especially if you only use the platform occasionally. ChatGPT can retain specific details through its saved-memories feature, which helps keep your conversations relevant and personal, but even that has a limit. According to ChatGPT itself, it can store up to a few thousand tokens of context, which works out to only a few pages of text (OpenAI hasn’t explicitly published the figure).
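If you want a feel for what “a few thousand tokens” actually means, you can count tokens yourself with OpenAI’s open-source tiktoken library. The snippet below is a minimal sketch for illustration only: the cl100k_base encoding and the “roughly one page of text” sample are assumptions, not details OpenAI has confirmed about how ChatGPT’s memory is measured.

```python
# Rough illustration of how tokens map to pages of text, using OpenAI's
# open-source tiktoken tokenizer. ChatGPT's internal tokenizer and memory
# budget aren't published, so treat these numbers as ballpark estimates.
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI models;
# assuming it here purely for demonstration.
enc = tiktoken.get_encoding("cl100k_base")

# Roughly one printed page of English prose (~2,700 characters).
page_of_text = "The quick brown fox jumps over the lazy dog. " * 60
tokens = enc.encode(page_of_text)

print(f"Characters: {len(page_of_text)}")
print(f"Tokens: {len(tokens)}")
# English prose averages about four characters per token, so a budget of a
# few thousand tokens really does amount to only a few pages of text.
```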
It also cannot store every piece of information you feed it. It won’t store short-lived or trivial facts about yourself, sensitive personal information (such as sexual orientation or a detailed political stance), or anything that could identify you, such as phone numbers and IDs. It will store information like your interests, personal context, preferences, and long-term projects. Saved memories also fall into two main categories: conversation memory and persistent memory. Nothing ends up in persistent memory unless you explicitly ask for it or set it up to save automatically.
On any AI platform, you should avoid giving out your personal information. Never share sensitive data like your banking details, your work projects, or even device information that could be linked to your accounts. ChatGPT’s Privacy Policy states that OpenAI collects account information, user content (whatever you enter as a prompt, including uploaded files), and personal information you provide in communications such as emails, events, and surveys.
All the information you provide is used to improve and train OpenAI’s services, and to comply with legal obligations and its terms of service. The Privacy Policy also notes that OpenAI may disclose your data to third parties, including government authorities and its affiliates. Remember, too, that human reviewers can see the data you provide unless you opt out of having your conversations used for training, and even then, staff can still access some data when deemed necessary under confidentiality obligations (for example, legal requests or investigating abuse).