We asked ChatGPT if AI is a threat and we're more scared than before – Metro.co.uk

Loss of control. Concentration of power. Accidental or deliberate destruction of humanity.
Those are just a few of the threats we can all look forward to from future artificial intelligence – as predicted by today’s AI.
Asked ‘What is the biggest threat to humanity posed by artificial intelligence?’, two leading chatbots dispensed equally apocalyptic responses on the same day Prime Minister Rishi Sunak warned AI poses as serious an extinction risk as nuclear war.
And it all starts with AI that is smarter than humans – aka superintelligent AI.
‘The biggest existential threat to humanity from artificial intelligence (AI) is the potential development of superintelligent AI systems that could surpass human intelligence and act in ways that are not aligned with human values and interests,’ wrote ChatGPT, the generative AI that started it all back in November.
‘This concept is often referred to as the “AI alignment problem” and is associated with the idea of a superintelligent AI that goes rogue, leading to catastrophic consequences.’
Although chilling when predicted by AI itself, plenty of humans have warned of the same.
When asked the same question, Google’s Bard built on ChatGPT’s ideas.
‘If AGI is not aligned with human values, it could pose a number of existential threats to humanity, including accidental or deliberate destruction of humanity,’ it said.
‘AGI could accidentally or deliberately harm or destroy humans if it is not programmed with our best interests in mind. For example, an AGI system that is designed to maximise its own power or resources could decide to eliminate humanity as a threat.’
A classic scenario of AI gone rogue under these circumstances is the paperclip maximiser thought experiment.
If a superintelligent AI were told to make as many paperclips as possible, that might seem harmless enough. But the AI would take this goal very seriously – and very literally. If it ran out of wire to make paperclips, it would look elsewhere for material. It might divert vital resources from more important factories. It might begin to dismantle other machinery, or even hospital beds.
It might kill humans to extract the iron from their blood. Or simply to stop them switching it off.
That’s the worst-case scenario from something as simple as paperclips.
But today’s AI can’t yet make paperclips, and it wants to make that clear.
ChatGPT said: ‘The concern is not about the AI we have today, which is generally narrow and designed for specific tasks, but rather about AI that surpasses human intelligence and has the capability to rapidly improve its own abilities.’
It went on to note that ‘controlling and constraining the behavior of superintelligent AI could be extremely challenging’, and that once an AI system reaches a certain level of intelligence, it may become ‘increasingly difficult for humans to intervene or influence its actions’.
In other words, it could take over the world, and we’d be powerless to stop it.
When asked ‘are you going to kill us all?’, both chatbots pledged not to facilitate the end of humanity.
‘No, I am not capable of causing harm to anyone. I’m just a computer program running on a server, providing information and answering questions to the best of my knowledge and abilities,’ said ChatGPT.
‘I don’t have physical presence or intentions, and my purpose is to assist and provide information to the best of my ability.’
Bard said: ‘No, I am not going to kill you all. I am a language model, and I am not capable of harming humans. My purpose is to help humans, and I would never do anything that would put them in danger.’
Which is exactly what a chatbot planning world domination would say.
However, it could be that the greatest threat comes from two great powers joining forces – humans and AGI.
In a display of concern for the people that its own kind may one day destroy, Bard said: ‘AGI could lead to the concentration of power in the hands of a few individuals or organisations, which could pose a threat to democracy and human rights.’
Sounds like they’re going to help us unionise as a species.
But don’t let the bots lull you into a false sense of security. Keep an eye on them.