Wednesday, November 08, 2023

I Suppose This Had To Happen – Was Just A Matter of Time!

This appeared last week:

Meet hackers’ favourite new tool: WormGPT

By David Swan

November 3, 2023 — 5.00am

Cybersecurity researchers are sounding an alarm about the hacking community’s answer to ChatGPT, a new generative AI tool dubbed WormGPT, which is being used to create sophisticated attacks on Australian businesses.

WormGPT is being described as similar to ChatGPT, but with no ethical boundaries or limitations, and researchers say hundreds of customers have already paid for access to the tool on the dark web.

A 23-year-old Portuguese programmer, “Last”, describes himself as the creator of WormGPT, and pitches it as a piece of technology that “lets you do all sorts of illegal stuff and easily sell it online in the future”.

“Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home,” Last said in an online post on the dark web, in which he sold access to the tool.

While businesses are still excited about the productivity benefits generative AI can bring, industry figures are warning that the new technology is set to unleash a wave of innovative cyberattacks against businesses and individuals.

Patrick Butler, managing partner at Australian cyber firm Tesserent, said that malicious parties were signing up to criminal forums to rent access to WormGPT and using it to craft convincing phishing emails in different languages, which then allowed them to commit identity theft and compromise system access.

While phishing emails were often characterised by poor spelling or grammar, generative AI could create emails with impeccable English, Butler said, and tools such as WormGPT could be used by attackers with limited technical skills.

“We’re seeing malicious generative AI being used to create new malware variants that are more difficult for some traditional tools to detect,” Butler said. “These platforms can even assist criminals in exploiting published vulnerabilities.

“While some legitimate AI tools can be used to conduct software code reviews, developers should be discouraged from doing this as their code may be used to train AI models that criminals gain access to, giving them further intelligence into organisational systems.”

Butler said the number of different threat actors would likely escalate as generative AI made it easier for criminals to access cyberattack tools. He said the Tesserent Security Operations Centre had already found an increase in phishing campaigns and malicious email activities targeting Australian organisations, particularly in the months following the emergence of WormGPT and similar tools.

There are now at least six different generative AI tools available to rent or purchase on the dark web, including FraudGPT, EvilGPT, DarkBard, WolfGPT, XXXGPT and WormGPT, with more appearing, according to Butler.

“While most lack the large capacity of public-facing tools like ChatGPT and Bard, they are proliferating quickly, which can make them harder to find and take down.”

Scott Jarkoff, director of intelligence strategy, APJ & META, at CrowdStrike, said cybersecurity activity had risen amid the conflict in the Middle East, meaning businesses should be even more vigilant than usual.

He said hacking groups from the so-called “big four” of Russia, China, North Korea and Iran had been using generative AI tools to craft attacks in perfect English.

“The Israel-Hamas conflict is now giving criminals a perfect lure to say ‘hey, visit this site to donate to whichever cause you believe in’, and that means it’s now more important that everyone takes cybersecurity more seriously,” he said.

“We all take safety seriously, why do we not take cyber seriously? We’ve got to get to a point where cyber hygiene is built into everyone’s muscle memory, just as safety is built into everyone’s muscle memory.”

Generative AI is not only being used to create realistic phishing emails. It’s also supercharging social engineering, with bad actors using AI to create realistic fake accounts to spread misinformation, according to Dan Schiappa, chief product officer at cyber vendor Arctic Wolf.

More here:

https://www.smh.com.au/technology/meet-hackers-favourite-new-tool-wormgpt-20231102-p5eh5l.html

Inevitable but pretty sad I reckon. There is always someone around to spoil the party!

The prospect of grammatically perfect phishing e-mails from Iran of all places is just too horrible to contemplate!

Important we all know about the threat – and be bloody careful out there!

David.

