AI tools like ChatGPT increasingly used by cybercriminals for … – NL Times

Criminals are increasingly using advanced AI tools like ChatGPT to create convincing fraudulent emails, raising concerns about an imminent wave of phishing attacks, especially against vulnerable targets such as small businesses and the elderly, AD reported.
“Smart chatbots” have been popping up like mushrooms in recent months. These bots can easily generate texts, such as emails. “ChatGPT is very good at mimicking human conversations, including misleading ones,” AI expert Henk van Ess told AD.
Phishing, an online fraud method in which criminals impersonate legitimate entities to illicitly obtain personal data, is becoming increasingly sophisticated with AI tools like ChatGPT. The technology enables scammers to convincingly mimic banks or well-known individuals, making fraudulent attempts harder to identify.
According to AD, sets of phishing instructions for ChatGPT are being sold online for 30 to 50 dollars, enabling scammers to generate messages more misleading than traditional phishing emails.
This poses a significant threat to groups less familiar with digital technology, who may struggle to distinguish fraudulent communication from the genuine article. “We are in a moment in time where technology is developing at an unprecedented pace, making it difficult for people to keep up,” said Dave Maasland, CEO of the digital security company Eset Nederland. This growing gap between what technology can do and people’s ability to understand and navigate it safely is expanding the potential for misuse by criminals, he remarked.
Europol data and cybersecurity research indicate that criminals are already using AI tools like ChatGPT for illicit activities, with small businesses particularly at risk. A study by I&O Research reveals that a quarter of entrepreneurs are not taking measures to secure their digital platforms. Industry associations recommend that small and medium enterprises train their employees to boost cyber awareness, especially in the face of an expected increase in phishing emails.
Concerns extend to the banking industry, where fears are growing about customers becoming victims of cyber attacks via chatbots. Samantha Reilly, an ICT specialist at ING, stresses that this trend particularly threatens SMEs due to their limited digital security resources. She advocates for a proactive governmental role in educating such businesses about the risks posed by AI tools like ChatGPT.
Dave Maasland calls for government intervention, including steps to help people distinguish genuine government websites from fraudulent ones. “It is sometimes not easy for people to know for sure if a website is from the government or if it is a trap. That is really pure madness!” he said, warning of potential societal consequences if a large proportion of people become tech-averse out of fear of cybercrime.
Michiel Steltman, director of the Stichting Digitale Infrastructuur Nederland, argues against relying solely on AI regulation. “The problem with criminals is that they don’t obey the law, so regulation doesn’t help there. What remains is to increase the chance of being caught and improve resilience,” he noted.
The recent AI bill passed by the European Parliament is already outdated given the rapid advancement of AI technology, fears VVD MEP and cybersecurity rapporteur Bart Groothuis. “I worry a lot about AI and the consequences for our security. This technology is not getting two or three times faster and smarter per year, but a million times,” he said.
© 2012-2023, NL Times, All rights reserved.