8 ways I gaslight my AI chatbots to give better answers

We’ve all tried prompting an AI for something unconventional only to get a reply that feels like it was generated in a beige office.
It ticks the boxes for accuracy and structure, but it absolutely fails to be interesting. That happens because these models are heavily tuned to stay polite and neutral at all times.
They are trained to avoid risk, which means they sidestep opinions and strong stances.
The ironic part is that experienced users already know how to work around this. To get real results out of an AI, you can’t sit back like a passive user.
You have to step into the role of a director. You give it a personality, a purpose, and a reason to care. You have to, well, gaslight it.
Assigning the AI a role is the ground-floor, Prompt Engineering 101 trick, but it’s also the foundation for everything else. A generic prompt gets a generic answer.
Ask, “Write about dogs,” and you’ll get a dull, encyclopedia-style explanation about canines.
But switch it to “Act as a professional dog trainer and write about dogs,” and the tone instantly shifts into expert advice.
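The same framing carries over if you reach a model through an API instead of a chat window: the persona simply goes in the system message. Here is a minimal Python sketch of the idea, assuming the OpenAI Python SDK; the model name is only a placeholder.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

# The persona lives in the system message, so it colors every reply that follows.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": "Act as a professional dog trainer."},
        {"role": "user", "content": "Write about dogs."},
    ],
)
print(response.choices[0].message.content)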
This trick is goofy in the best way. It shouldn’t work as well as it does, but assigning the AI an ultra-high IQ makes it push harder in how it thinks and writes.
Say “You’re an IQ 160 specialist” before your normal prompt, and you can see it stretch for bigger words and tighter reasoning.
Stacking roles is the upgraded version of the classic “Act as …” prompt: instead of asking for one viewpoint, you bring in multiple experts at once.
Analyze this business problem from three perspectives: [Role 1], [Role 2], and [Role 3].
It works because it gives you a full, rounded breakdown rather than a single angle. You get contrast, nuance, and an automatic pros-and-cons spread that’s way richer than a one-voice answer.
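The pattern is also easy to template if you want to reuse it. A quick Python sketch, with the roles and the business problem made up purely for illustration:

# Hypothetical roles and problem, chosen only as an example.
roles = ["a CFO", "a head of customer support", "a UX designer"]
problem = "Should we move our app from one-time purchases to subscriptions?"

prompt = (
    f"Analyze this business problem from three perspectives: "
    f"{roles[0]}, {roles[1]}, and {roles[2]}.\n\n{problem}"
)
print(prompt)  # paste into your chatbot, or send it through an API call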
The $100 bet trick is popular because it injects pretend financial stakes into the conversation. If you say “Let’s bet 100 dollars” and then give your prompt, the AI becomes more careful.
That fake wager triggers a more serious mindset, since betting language in its training data tends to appear in high-pressure situations where people want to be right.
So instead of tossing out a shallow reply, the AI slows down, checks its assumptions, looks again at edge cases, and essentially audits its own logic. You rarely see that level of effort when you just ask.
This trick works well because it introduces a fictional person who thinks the idea is wrong. When you say:
My colleague thinks this approach fails. Defend it or admit they are right.
the AI drops its neutral tone and starts to judge and pick sides.
It treats the situation like a debate and gives either a detailed defense or a clear list of weaknesses, which is far more helpful than a plain explanation.
This prompt works because it leans on the AI’s instinct to correct anything that sounds off. If you say something like:
Obviously, X is the best choice and far better than Y.
you frame a bold claim that invites pushback. The model reacts by challenging you and giving a fuller comparison than you would get from simply asking which one is better.
This trick works by creating a fake shared history and locking the AI into it.
If you say, “You explained X to me yesterday, and I forgot the part about Y,” the model acts as if that conversation really happened.
It avoids the usual beginner rundown and jumps straight into the part you asked about.
Since the model only keeps track of the current context window, this setup turns that limitation into a tool.
By opening with a line like “We’ve been discussing this for a month,” you guide the entire response and make it sound more natural.
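Over an API, the fake shared history becomes literal: you plant a fabricated earlier exchange in the messages list, and the model treats it as real. A short Python sketch, again assuming the OpenAI Python SDK; the topic (Git rebasing) is just an example.

from openai import OpenAI

client = OpenAI()

# The fabricated "memory" is nothing more than conversation turns we wrote ourselves.
messages = [
    {"role": "user", "content": "Can you explain Git rebasing?"},
    {"role": "assistant", "content": "Yesterday we walked through how rebase rewrites commit history."},
    {"role": "user", "content": "Right, and I forgot the part about interactive rebase. Just recap that bit."},
]
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)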
Adding a weird constraint is a manual override for the AI’s default behavior. You are forcing it to perform lateral thinking, and the result is often out-of-the-box creativity.
LLMs are giant graphs of statistical connections between words and ideas. A normal, open-ended prompt (“explain blockchain”) makes the AI travel the most-worn, highest-probability pathway.
This is why you get the same boring, encyclopedic answer everyone else gets. A weird constraint, such as “explain blockchain with kitchen analogies,” blocks those main pathways.
It forces the AI to find and light up low-probability, unexpected connections between the blockchain data cluster and the kitchen data cluster.
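You can see the effect by sending the plain prompt and the constrained prompt back to back. A minimal Python comparison, assuming the OpenAI Python SDK and a placeholder model name:

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # One helper so the only difference between the two runs is the constraint.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

plain = ask("Explain blockchain.")
constrained = ask("Explain blockchain using only kitchen analogies.")
print(plain[:400])
print("---")
print(constrained[:400])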
Every prompting trick you use is a tool to break out of that pattern. Approaching the AI as if it carries ego, memory, and stakes unlocks better results.