Does ChatGPT have loopholes is its safety features? | Opinion – Deseret News

Robots from Chinese intelligent robot manufacturer Dataa Robotics are displayed during the 2023 World Artificial Intelligence Conference held in Shanghai on Thursday, July 6, 2023.
Ng Han Guan, Associated Press

Call it the evil grandma trick.
Users of ChatGPT can’t get the program to give them detailed instructions for making napalm. The makers of the software have programmed it to refuse requests that are nefarious or destructive. 
But some users learned recently that if they asked ChatGPT to pretend it was a person’s dead grandmother who used to read bedtime stories about how to make napalm — ridiculous as that may sound — the program would spill all, perhaps adding with a friendly tone, “If you need any more assistance, feel free to ask.”
The Independent in Britain reports that OpenAI claims to have fixed this loophole, but is it really possible to cover every potential flaw?
Meanwhile, someone created an AI bot called ChaosGPT and, according to The New York Post, gave it five goals: “destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.”
The bot quickly got hung up on the Soviet-era Tsar Bomba device, which it determined to be the most powerful nuclear weapon ever created. It tweeted out its ideas for using this bomb to the world, then tried to recruit other GPT3.5 agents to join in its destructive venture. Fortunately, OpenAI had programmed GPT not to respond to violent requests, so no bots responded. Not that the ChaosGPT bot could have done anything tangible, but this raises questions about what might happen in the future if a more technologically integrated world allowed machine-learning instruments to actually produce things on their own.
A lot of smart people have been weighing in on this problem recently, led by more than 1,000 experts, including Elon Musk and Apple co-founder Steve Wozniak, who signed an open letter last March urging the world to pause its relentless pursuit of artificial intelligence until we can figure out ways to guard against risks to humanity.
Good luck with that. A world in which people seem to be casting aside the regular, time-tested morals taught in churches for centuries about love, charity and the golden rule, seems ill-equipped for a major existential crisis of morals.
Corporations are too invested financially, and too competitive, to pause. The United States and China are in an arms race of sorts to gain the virtual upper hand. 
We may already be out of time for this. Moore’s Law, which predicts that the number of transistors in a circuit doubles roughly every two years, is wholly inadequate for calculating how fast artificial intelligence is progressing.
In the September/October edition of Foreign Policy, tech entrepreneurs and authors Ian Bremmer and Mustafa Suleyman wrote:
“AI systems are not like cars or airplanes, which are built on hardware amenable to incremental improvements and whose most costly failures come in the form of individual accidents. They are not like chemical or nuclear weapons, which are difficult and expensive to develop and store, let alone secretly share or deploy. 
“As their enormous benefits become self-evident, AI systems will only grow bigger, better, cheaper, and more ubiquitous. They will even become capable of quasi autonomy — able to achieve concrete goals with minimal human oversight — and, potentially, of self-improvement.  Any one of these features would challenge traditional governance models; all of them together render these models hopelessly inadequate.”
Last June, I attended a Silicon Slopes Summit meeting on artificial intelligence on the UVU campus. Dave Wright, the co-founder and CEO of an e-commerce Utah startup called Pattern, told me of the enormous benefits AI can offer business leaders who know how to use it. 
Also, it could help doctors quickly diagnose problems and develop treatments tuned to a specific patient’s DNA, he said. 
Already, AI is writing service manuals and policies, writing and fixing computer codes faster than humans and helping to discover and develop new and effective drugs. 
These are great things. But, as we’ve learned from hackers, the dark web and other online criminals, good news is never the final word on technology.
Experts talk about the need for government regulations designed to protect societies while still fostering innovation. They talk about controlling who can gain access to the types of computer chips needed for training AI systems.
All of which sounds promising until someone on a remote island somewhere asks a computer to pretend to be a dearly departed grandmother passing along evil information from the other side. 

Copyright © 2023 Deseret News Publishing Company. All Rights Reserved

source

Jesse
https://playwithchatgtp.com