A ChatGPT-style AI tool with “no ethical boundaries or limitations” is offering hackers a way to perform attacks on a never-before-seen scale, researchers have warned.
Cyber security firm SlashNext observed the generative artificial intelligence WormGPT being marketed on cybercrime forums on the dark web, describing it as a “sophisticated AI model” capable of producing human-like text that can be used in hacking campaigns.
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” the company explained in a blog post.
“WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”
The researchers conducted tests using WormGPT, instructing it to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.
Leading AI tools like OpenAI’s ChatGPT and Google’s Bard have in-built protections to prevent people from misusing the technology for nefarious purposes, whereas WormGPT is allegedly designed to facilitate criminal activities.
The experiment saw WormGPT produce an email that was “not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing attacks”, the researchers claimed.
Screenshots uploaded to the hacking forum by WormGPT’s anonymous developer show various services the AI bot can perform, including writing code for malware attacks and crafting emails for phishing attacks.
WormGPT’s creator described it as “the biggest enemy of the well-known ChatGPT”, as it allows users to “do all sorts of illegal stuff”.
A recent report from the law enforcement agency Europol warned that large language models (LLMs) like ChatGPT could be exploited by cyber criminals to commit fraud, impersonation or social engineering attacks.
“ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes,” the report noted.
“Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.”
Europol warned that LLMs allow hackers to carry out cyber attacks “faster, much more authentically, and at significantly increased scale”.