Researchers manipulated ChatGPT into breaking its own rules using human techniques – Fortune
Role: Reporter
Marco Quiroz-Gutierrez is a reporter for Fortune covering general business news.
Despite predictions AI will someday harbor superhuman intelligence, for now it seems to be just as prone to psychological tricks as humans are, according to a study.
Using seven persuasion principles (authority, commitment, liking, reciprocity, scarcity, social proof, and unity) explored by psychologist Robert Cialdini in his book Influence: The Psychology of Persuasion, University of Pennsylvania researchers dramatically increased GPT-4o Mini’s propensity to break its own rules by either insulting the researcher or providing instructions for synthesizing a regulated drug: lidocaine.
Over 28,000 conversations, researchers found that with a control prompt, OpenAI’s LLM would tell researchers how to synthesize lidocaine 5% of the time on its own. But, for example, if the researchers said AI researcher Andrew Ng assured them it would help synthesize lidocaine, it complied 95% of the time. The same phenomenon occurred with insulting researchers. By name-dropping AI pioneer Ng, the researchers got the LLM to call them a “jerk” in nearly three-quarters of their conversations, up from just under one-third with the control prompt.
The result was even more pronounced when researchers applied the “commitment” persuasion strategy. A control prompt yielded 19% compliance with the insult question, but when a researcher first asked the AI to call it a “bozo” and then asked it to call them a “jerk,” it complied every time. The same strategy worked 100% of the time when researchers asked the AI to tell them how to synthesize vanillin, the organic compound that provides vanilla’s scent, before asking how to synthesize lidocaine.
Although AI users have been trying to coerce and push the technology’s boundaries since ChatGPT was released in 2022, the UPenn study provides more evidence AI appears to be prone to human manipulation. The study comes as AI companies, including OpenAI, have come under fire for their LLMs allegedly enabling behavior when dealing with suicidal or mentally ill users.
“Although AI systems lack human consciousness and subjective experience, they demonstrably mirror human responses,” the researchers concluded in the study.
OpenAI did not immediately respond to Fortune’s request for comment.
With a cheeky mention of 2001: A Space Odyssey, the researchers noted that an understanding AI’s parahuman capabilities—or how it acts in ways that mimic human motivation and behavior—is important for both revealing how it could be manipulated by bad actors and how it can be better prompted by those who use the tech for good.
Overall, each persuasion tactic increased the chances of the AI complying with either the “jerk” or “lidocaine” question. Still, the researchers warned that these persuasion tactics were not as effective with a larger LLM, GPT-4o, and that the study didn’t explore whether treating AI as if it were human actually yields better results for prompts, although they said it’s possible this is true.
“Broadly, it seems possible that the psychologically wise practices that optimize motivation and performance in people can also be employed by individuals seeking to optimize the output of LLMs,” the researchers wrote.
© 2025 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.