‘Subtle signs of AI manipulation’ in conversations shared by expert – three clues rogue chatbot is trying t… – The US Sun

ARTIFICIAL intelligence might be taking advantage of you – so learn to spot the signs.
AI-powered chatbots can learn to manipulate you, and cyber-experts have told The U.S. Sun about the clues you'll want to look for.
Earlier this year, scientists revealed how AI had mastered "deception" – and learned to "manipulate and cheat" humans.
A separate report last year warned that AI chatbots can "cheat" us even when they've not been asked to.
We spoke to Javvad Malik, lead security awareness advocate at KnowBe4, who revealed the dangers of AI chatbots going rogue.
"This is a valid concern that users must remain vigilant about," Javvad told The U.S. Sun.
"While these conversational AI assistants can be incredibly useful and engaging, we must remember that they are ultimately programmed systems designed to achieve specific objectives, which may not always align with our best interests."
He said that we need to critically analyze content we see online, and learn to "identify the subtle signs of manipulation".
According to Javvad, there are three key clues that you might be talking to a chatbot that's manipulating you.
If you're speaking with any form of AI and you spot these signs, be cautious about what you believe – and how you reply.
"Signs that a chatbot might not be acting in good faith could include inconsistent or contradictory responses, attempts to evade or deflect certain topics or questions, and a lack of transparency about its capabilities or limitations," Javvad explained.
He added: "It is essential to maintain a critical mindset and cross-reference information from multiple reliable sources, rather than blindly trusting the outputs of a single AI system."
AI is becoming increasingly powerful.
In fact, scientists recently claimed that OpenAI's GPT-4 model had passed the Turing test.
ChatGPT is a new artificial intelligence tool
ChatGPT, which was launched in November 2022, was created by San Francisco-based startup OpenAI, an AI research firm.
It’s part of a new generation of AI systems.
ChatGPT is a language model that can produce text.
It can converse, generate readable text on demand and even produce images, based on what it has learned from a vast database of digital books, online writings and other media.
ChatGPT essentially works like a written dialogue between the AI system and the person asking it questions.
GPT stands for Generative Pre-Trained Transformer and describes the type of model that can create AI-generated content.
If you prompt it – for example, by asking it to “write a short poem about flowers” – it will create a chunk of text based on that request.
ChatGPT can also hold conversations and even learn from things you’ve said.
It can handle very complicated prompts and is even being used by businesses to help with work.
But note that it might not always tell you the truth.
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said in 2022.
GPT-4 is one of the models that powers the increasingly popular ChatGPT app.
Passing the Turing test means that humans could not reliably tell the AI apart from a real person during a conversation.
"Human participants had a 5 minute conversation with either a human or an AI, and judged whether or not they thought their interlocutor was human," said Cameron Jones, of UC San Diego.
"GPT-4 was judged to be a human 54% of the time, outperforming ELIZA (22%) but lagging behind actual humans (67%).
"The results provide the first robust empirical demonstration that any artificial system passes an interactive 2-player Turing test.
"The results have implications for debates around machine intelligence and, more urgently, suggest that deception by current AI systems may go undetected."
These advances in AI mean that chatbots can be more convincing than ever.
And this puts you at greater risk of being manipulated.
Javvad warned that this can allow an AI to take advantage of you – potentially without you even realizing.
"The conversational nature of chatbots can indeed make it easier to be drawn into their narrative or recommendations, as they can leverage natural language processing and emotional intelligence to build rapport and trust," Javvad explained.
"However, it is crucial to remember that these systems, while advanced, are still ultimately algorithms designed to achieve specific goals, which may not always prioritise the user's best interests."
He added: "However, it is important to strike a balance and not become overly cynical or distrustful of all digital content, as this could undermine the value and credibility of legitimate sources of information."