The Rise of AI-Powered Phishing Tools

Poorly written emails riddled with typos and nonsensical sentences used to be the first sign of phishing attacks.

Email filters would send them straight to the spam folder.

Even if they did reach the main inbox, an employee who had passed general phishing awareness training could tell that something was off.

But once ChatGPT was made available for general use, everything changed.

It didn't take long until scammers started using the tool for text-based phishing attacks.

The distinction between genuine emails and phishing is now quite murky.

While the chatbot refuses to generate scam emails and warns that phishing is illegal and unethical, bad actors soon discovered that these restrictions are easy to bypass.

They avoid words that blatantly state the email's malicious intent, or they claim the request is for educational purposes.

ChatGPT wasn't designed for malicious intent, but it has been misused for phishing campaigns since its release.

Now, the dark web has spawned sinister versions of large language models. Known as FraudGPT and WormGPT, these chatbots are tailor-made for cybercriminal purposes such as building malware-laced sites and crafting polished phishing emails.

Should you worry about the AI-based cybercrime now possible with ChatGPT's malicious cousins?

WormGPT Used for Business Email Compromise

In July 2023, the new language processing tool WormGPT was released on the web. While ChatGPT comes with various restrictions and preprogrammed ethics, WormGPT does not.

This chatbot is based on a different language processing model than ChatGPT - an older one known as GPT-J that was released to the public back in 2021.

The key difference between it and ChatGPT is that it lacks the guardrails that make ChatGPT refuse tasks that could lead to criminal activity. Users can ask the chatbot hacking questions or request a phishing email, and it will comply.

Another key difference is that it was built for nefarious purposes: the chatbot was trained on data related to cybercrime, particularly malware.

For example, WormGPT can be used for a kind of phishing attack known as business email compromise (BEC). Scammers use these emails to obtain sensitive data or pressure the recipient into making money transfers.

A BEC email usually impersonates someone inside the target company to convince another employee to transfer money to a fraudulent bank account.
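That impersonation pattern is one defenders can screen for. The sketch below is a minimal, hypothetical illustration (the executive names and `example.com` domain are made up): it flags emails whose display name matches a known executive while the sending address comes from an outside domain, a classic BEC tell.

```python
import email.utils

# Hypothetical example data: executives commonly impersonated in BEC
# emails, plus the company's real sending domain.
EXECUTIVES = {"jane doe", "john smith"}
COMPANY_DOMAIN = "example.com"

def looks_like_bec(from_header: str) -> bool:
    """Flag a From header whose display name matches an executive
    but whose address is outside the company domain."""
    display_name, address = email.utils.parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip().lower() in EXECUTIVES and domain != COMPANY_DOMAIN

print(looks_like_bec('"Jane Doe" <jane.doe@example.com>'))  # internal sender
print(looks_like_bec('"Jane Doe" <ceo.urgent@gmail.com>'))  # spoofed sender
```

Real mail gateways do far more (SPF/DKIM/DMARC checks, reputation scoring), but this display-name-versus-domain mismatch is one of the simplest signals they rely on.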

However, the creators claim that it can be used to reverse engineer phishing emails, i.e. to recognize a possible BEC-like scam.

WormGPT is available in most programming and human languages.

Unlike ChatGPT, it doesn't have a free version, and access must be paid for in cryptocurrency.

How Well Does WormGPT Work?

Users report that this malicious chatbot, based on the older language model, doesn't perform as well as ChatGPT. It has difficulty completing simple tasks, and it's prone to crashing.

Even the phishing emails it writes are generic and not as convincing.

After WormGPT, researchers uncovered another malicious language model called FraudGPT. 

Is that one a cause for concern?

FraudGPT Programmed For Writing Phishing Websites

Not a lot is known about the cybercriminal tool FraudGPT. It's supposedly based on a newer version of the language model (a GPT-3 variant) and therefore offers more complex features for criminals.

As such, it can be used for more advanced phishing schemes.

How?

Phishing emails often include attachments and links that are infected with malware. After clicking on a link, the victim lands on the phishing site.

FraudGPT allows even those who lack hacking skills to craft spoofed websites that are hard to distinguish from genuine sites.

With that, the program introduces phishing-as-a-service, making this type of criminal activity available to the masses.
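Spoofed sites typically live on lookalike domains, and that is one place automation can push back. The sketch below is a simplified illustration with an invented allowlist: it uses plain Levenshtein edit distance to flag domains that are a near-miss of a known brand, the way typosquats like "paypa1.com" are.

```python
# Hypothetical allowlist of legitimate domains for illustration.
KNOWN_DOMAINS = ["paypal.com", "microsoft.com", "example-bank.com"]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def suspicious(domain: str, threshold: int = 2) -> bool:
    """Near-miss of a known brand domain, but not an exact match."""
    return any(0 < edit_distance(domain, known) <= threshold
               for known in KNOWN_DOMAINS)

print(suspicious("paypa1.com"))  # one character off "paypal.com"
print(suspicious("paypal.com"))  # exact match, so not flagged
```

Production detectors add homoglyph normalization and certificate-transparency monitoring, but edit distance against a brand list is the core of many typosquat scanners.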

Its creators claim that FraudGPT is also capable of detecting exploitable vulnerabilities in someone's code, generating convincing scam messages, and crafting new, undetectable kinds of malware.

For example, it can help a threat actor to identify websites that have a flaw that enables credit card hacking.

Because it can generate harmful code designed to bypass security solutions, the malware it produces might evade traditional cybersecurity tools.

Allegedly, the chatbot can also locate the sources of stolen sensitive data sold on the dark web, identify hacking forums, and help create additional hacking tools.

But can a single tool that is essentially a language processing model really have all of these advertised capabilities?

How Well Does FraudGPT Work?

As with WormGPT, the phishing emails FraudGPT writes are generic and not that convincing: grammatically correct and coherent, but formulaic.

Therefore, they might get detected by regular email filters.

Its full functionality is not yet clear because this chatbot is not readily available to everyone.

Have All Cyber Criminals Turned to AI Phishing?

Not quite.

Building AI-powered language models is now possible at a lower cost than ever before - which is why there is a surge of malicious ChatGPT lookalikes on the dark web.

To some extent, they do open doors to more sophisticated attack vectors for individuals who don't have advanced hacking or language skills.

We are about to see more text-based cyber attacks than ever before.

However, while these language models are damaging, they're still limited and not as sophisticated as many of the headlines would lead you to believe. Nor can they write a more convincing email than ChatGPT can.

By comparison, AI technology is moving at a rapid pace. WormGPT is based on an older language model (GPT-J) that lacks the capabilities and nuance of ChatGPT.

Although malicious iterations of the popular chatbot are available, many of their capabilities are more hype than an AI super threat coming for unsuspecting businesses.