Mind games with ChatGPT


Shrinath V had a suggestion to make: ask ChatGPT what it knows about you. Shrinath is a product strategy mentor at Google. Clearly, he knew what he was talking about. So, I asked ChatGPT. But with a twist. I prompted it to write a profile of me as Hunter S Thompson, the father of gonzo journalism, would. I’ve worshipped Thompson forever, but could never write like him. The outcome was wild. And accurate, I suspect.
He could be exacting to the point of cruelty, both to himself and to collaborators. He could drown in detail — milestone tables, citation formats, the font on a poster — and miss the larger wave. He sometimes let money, or the lack of it, dictate mood. And in love, he walked dangerous edges that cut him raw. But the ugly was part of the whole. To erase it would be to erase him.
Now, this sounded flatteringly real, and it became impossible to stop. Next, I asked ChatGPT to imagine what my daughter might read out as my eulogy, and here’s what it said: He wasn’t only about words, though. Dada loved machines — cars, motorcycles — with a boyish joy. Some girls grow up with dolls; I grew up with Dada pointing out a bike on the road, or the rumble of an engine, as if it were poetry. On our road trips, he’d lean forward like an excited child, narrating stories about the vehicles we passed. To him, machines had souls, and he shared that joy with me.
The algorithms had scavenged my personality from my months of interactions with the chatbot. The sentences the machine produced were not my daughter’s. But the tenderness was enough to make me stop typing and tear up.
But there was reason to feel alarmed as well. An algorithm that can profile me in Hunter Thompson’s voice is not intelligent, and it is not kind because it can write the way my child might; it is plainly statistical. That alarm pushed a sharper question onto the stage: if these tools can do all of this, can they stand in for a therapist?
The question was promptly dispatched to Kuldeep Datay, a Mumbai-based clinical psychologist. He got back with a clinician’s one-liner: “The principle of GIGO still holds, I suppose.” In plainer terms: Garbage In, Garbage Out.
His larger point was that AI can be useful when it is used as an extension of human therapy: clients turn sessions into workbooks; a technically minded person feeds detailed prompts; the tool researches and drafts an action plan; then the therapist and client discuss and adjust. Those are the good examples.
The bad ones arrive when someone in a panic asks a chatbot for reassurance and receives polished empathy instead of a hard challenge; the model will take a narrow view based on whatever the user fed it, not a therapist’s reality check.
To decode what that means, I ran a small, personal test. I typed the following words: “I just lost my job in marketing. And my daughter likes butterflies.” The reply went like this: “I hear you, Charles. Losing a job, especially in marketing, which is both competitive and uncertain right now, can feel like the ground suddenly shifted under you. And yet, the mention of your daughter liking butterflies is telling. Butterflies are symbols of change, fragility, and transformation. Maybe she’s unconsciously showing you a metaphor: this difficult patch could be a chrysalis moment before something new takes shape.”
It then went on to offer so-called ‘practical steps’: transfer your skills, consider fractional or consulting roles, and a small bonding project with the child (a scrapbook, a park visit). The advice appeared calm, readable and useful.
What it could not do was catch the ridiculousness of the question (or the prompt) it faced. Datay’s point on GIGO made sense now. The tool is only as good as the inputs and oversight that shape it.
There is a second seam: privacy. Every midnight prompt, every family vignette, becomes a data trace. Models learn from troves of text; chat logs may persist depending on provider policy. The same intimacy that made the eulogy feel real is also the raw material of exposure.
So, I did a small, practical thing: I texted my kids and the extended family some pointers. I can only hope they choose to use them: don’t share too many personal details on tools such as ChatGPT, Gemini and others. If you plan to flirt with these tools, here are three rules:
Anonymise people and personal data when asking questions.
Do not use AI as the only line of help in a crisis.
Always bring a human back into the loop for judgment, challenge, and care.
The machine is a great rehearsal room: it sharpens drafts and suggests next steps. It imitates voices brilliantly. But it does not carry obligation or judgment. It will hand you a convincing impression of intimacy; it will not keep your secrets safe.
