
FraudGPT and WormGPT are AI-driven Tools that Help Attackers Conduct Phishing Campaigns

‘FraudGPT’ Malicious Chatbot Now for Sale on Dark Web

Before we dive into "FraudGPT" and "WormGPT," note that we have written several posts on the threat of AI-driven attacks, and they have been among our most downloaded and most read blog posts. Our goal is to continue this series so you can track the evolution of AI-driven malware and the defensive solutions we are documenting along the way. We highly recommend reading the following three posts alongside today's post.

The Use of Artificial Intelligence in Cyber Attacks and Cyber Defense
https://secureops.com/blog/ai-offense-defense/

Four Artificial Intelligence Threats Will Challenge the Cybersecurity Industry
https://secureops.com/blog/ai-generated-attacks/

ChatGPT-3 and now ChatGPT-4 — What Does it Mean for Cybersecurity?
https://secureops.com/blog/cti-chatgpt-4/

Threat actors riding on the popularity of ChatGPT have launched yet another copycat hacker tool, one that offers chatbot services similar to the real generative AI-based app but is aimed specifically at promoting malicious activity. Researchers from Netenrich revealed in a post published July 25 that they had found ads on the Dark Web for an AI-driven hacker tool dubbed "FraudGPT," which is sold on a subscription basis and has been circulating on Telegram since the previous Saturday.

FraudGPT starts at $200 per month and goes up to $1,700 per year, and it's aimed at helping hackers conduct their nefarious business. The actor behind it claims to have over 3,000 confirmed sales and reviews for FraudGPT. Another similar AI-driven hacker tool, WormGPT, has been in circulation since July 13 and was outlined in detail in a report by SlashNext. Like ChatGPT, these emerging adversarial AI tools are based on models trained on large data sources, and they can generate human-like text based on the input they receive.


The tools "appear to be among the first inclinations that threat actors are building generative AI features into their tooling," says John Bambenek, principal threat hunter at Netenrich, a cloud data analytics security company in Plano, Texas. "Before this, our discussion of the threat landscape has been theoretical." FraudGPT, which ads tout as a "bot without limitations, rules, and boundaries," is sold by a threat actor who claims to be a verified vendor on various underground Dark Web marketplaces, including Empire, WHM, Torrez, World, AlphaBay, and Versus.


Cybercriminals Now Armed with AI Chatbots: WormGPT and FraudGPT

These AI-driven tools can help attackers use AI to their advantage when crafting phishing campaigns, generating messages aimed at pressuring victims into falling for business email compromise (BEC) and other email-based scams. FraudGPT can also help threat actors do a slew of other bad things, such as writing malicious code; creating undetectable malware; finding non-VBV bins; creating phishing pages; building hacking tools; finding hacking groups, sites, and markets; writing scam pages and letters; finding leaks and vulnerabilities; and learning to code or hack. Even so, helping attackers create convincing phishing campaigns appears to be one of the main use cases for a tool like FraudGPT, according to Netenrich. The tool's proficiency at this was even touted in promotional material on the Dark Web, demonstrating how FraudGPT can produce a draft email that will entice recipients to click on a supplied malicious link.
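Because these AI-written lures are fluent and free of the telltale typos defenders once relied on, screening has shifted toward the pressure tactics the messages depend on. As a minimal sketch of that idea in Python (not anything from Netenrich's research), the following flags emails that combine urgency language with payment or credential requests; the phrase lists and threshold are illustrative assumptions.

```python
import re

# Illustrative only: pressure-tactic phrases common in BEC lures.
# The phrase lists and scoring threshold are assumptions for this sketch.
URGENCY = [r"\burgent\b", r"\bimmediately\b", r"\bwithin 24 hours\b",
           r"\bbefore end of day\b", r"\bconfidential\b"]
REQUESTS = [r"\bwire transfer\b", r"\binvoice\b", r"\bgift cards?\b",
            r"\bverify your (password|account)\b", r"\bbank details\b"]

def bec_pressure_score(body: str) -> int:
    """Count how many urgency and payment/credential cues appear in a body."""
    text = body.lower()
    return sum(bool(re.search(p, text)) for p in URGENCY + REQUESTS)

def looks_like_bec(body: str, threshold: int = 2) -> bool:
    # Flag only when multiple cues co-occur; a single hit is usually benign.
    return bec_pressure_score(body) >= threshold

if __name__ == "__main__":
    sample = ("Please process this invoice immediately. This is urgent "
              "and confidential; wire transfer details are attached.")
    print(looks_like_bec(sample))  # True: urgency and payment cues co-occur
```

A heuristic like this is noisy on its own; in practice it would be one weak signal feeding a broader scoring pipeline rather than a standalone filter.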

Jailbreaking ChatGPT's Ethical Guardrails

While ChatGPT can also be exploited as a hacker tool to write socially engineered emails, there are ethical safeguards that limit this use. However, the growing prevalence of AI-driven tools like WormGPT and FraudGPT demonstrates that it isn't difficult to re-implement the same technology without those safeguards.

FraudGPT and WormGPT are yet more evidence of what one security expert calls "generative AI jailbreaking for dummies": bad actors misusing generative AI apps to bypass the ethical guardrails that OpenAI has been actively defending, a battle that has been mostly uphill. "It's been an ongoing struggle," says Pyry Avist, co-founder and CTO at Hoxhunt. "Rules are created, rules are broken, new rules are created, those rules are broken, and on and on." While one can't "just tell ChatGPT to create a convincing phishing email and credential harvesting template sent from your CEO," someone "can pretend to be the CEO and easily draft an urgent email to the finance team demanding them to alter an invoice payment," he says.


Defending Against AI-Enabled Cyber Threats 

Indeed, generative AI tools across the board provide criminals with the same core functions that they provide technology professionals. For example, with the ability to operate at greater speed and scale, attackers can now generate phishing campaigns quickly and launch more simultaneously. As phishing remains one of the primary ways cyber attackers gain initial entry into an enterprise system to conduct further malicious activity, it’s essential to implement conventional security protections against it. These defenses can still detect AI-enabled phishing and, more importantly, subsequent actions by the threat actor.
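One of those conventional protections is standard email authentication. Below is a minimal sketch, assuming the receiving gateway has already stamped each message with an Authentication-Results header (RFC 8601): quarantine anything that reports an SPF or DKIM failure, no matter how convincing the AI-written body is. The sample message and lookalike domain are made up for illustration.

```python
from email import message_from_string
from email.message import Message

def auth_results_fail(msg: Message) -> bool:
    """Return True if the gateway-added Authentication-Results header
    reports an SPF or DKIM failure (result names per RFC 8601)."""
    results = msg.get_all("Authentication-Results") or []
    joined = " ".join(results).lower()
    return "spf=fail" in joined or "dkim=fail" in joined

# Hypothetical inbound message; the sender domain is a fabricated lookalike.
raw = """\
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=ceo@examp1e.com; dkim=none
From: "CEO" <ceo@examp1e.com>
Subject: Urgent invoice change

Please update the payment details today.
"""

msg = message_from_string(raw)
if auth_results_fail(msg):
    print("Quarantine: failed SPF/DKIM despite convincing content")
```

The point is that authentication checks operate on infrastructure metadata the AI tool never touches, which is why they survive the jump in lure quality.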

"Fundamentally, this doesn't change the dynamics of what a phishing campaign is, nor the context in which it operates," Bambenek says. "As long as you aren't dealing with phishing from a compromised account, reputational systems can still detect phishing from inauthentic senders, i.e., typosquatted domains, invoices from free Web email accounts, etc." He adds that implementing a defense-in-depth strategy, with all available security telemetry feeding fast analytics, can help organizations identify a phishing attack before attackers compromise a victim and move on to the next phase of an attack.
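To make the reputational-check idea concrete, here is an illustrative sketch (not Netenrich's implementation) of one such signal: compare a sender's domain against an allowlist of domains the organization legitimately corresponds with, and flag near-misses such as typosquats by edit distance. The domain list and distance threshold are assumptions for the example.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical allowlist of domains this organization trusts.
KNOWN_DOMAINS = {"secureops.com", "netenrich.com", "example.com"}

def is_suspected_typosquat(sender_domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, a trusted domain."""
    if sender_domain in KNOWN_DOMAINS:
        return False
    return any(edit_distance(sender_domain, d) <= max_distance
               for d in KNOWN_DOMAINS)

print(is_suspected_typosquat("secure0ps.com"))   # True: one character off
print(is_suspected_typosquat("unrelated.org"))   # False: not a near-miss
```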

"Defenders don't need to detect every single thing an attacker does in a threat chain; they just have to detect something before the final stages of an attack — that is, ransomware or data exfiltration — so having a strong security data analytics program is essential," Bambenek says. Other security professionals also promote fighting adversarial AI with the growing number of AI-based security tools, in effect fighting fire with fire to counter the increased sophistication of the threat landscape.
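That "detect something before the final stages" advice maps naturally onto simple event correlation over security telemetry. The sketch below is a hypothetical illustration, not any vendor's product: it raises an alert when a single host shows two or more distinct attack-chain stages within a time window. The event names, stage mapping, window, and threshold are all assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical mapping of telemetry event types to attack-chain stages.
STAGE = {"phish_link_click": "initial_access",
         "office_spawns_shell": "execution",
         "new_admin_account": "persistence",
         "mass_file_rename": "impact"}

WINDOW = timedelta(hours=24)  # assumed correlation window

def correlate(events):
    """Alert when one host shows >= 2 distinct stages within WINDOW.

    `events` is an iterable of (host, event_type, timestamp) tuples.
    """
    by_host = defaultdict(list)
    for host, etype, ts in events:
        if etype in STAGE:
            by_host[host].append((ts, STAGE[etype]))
    alerts = []
    for host, seen in by_host.items():
        seen.sort()  # order by timestamp
        for i, (ts, _) in enumerate(seen):
            stages = {s for t, s in seen[i:] if t - ts <= WINDOW}
            if len(stages) >= 2:
                alerts.append((host, sorted(stages)))
                break
    return alerts

events = [
    ("wkstn-42", "phish_link_click", datetime(2023, 7, 25, 9, 0)),
    ("wkstn-42", "office_spawns_shell", datetime(2023, 7, 25, 9, 3)),
    ("wkstn-07", "phish_link_click", datetime(2023, 7, 25, 10, 0)),
]
print(correlate(events))  # [('wkstn-42', ['execution', 'initial_access'])]
```

Note that the single click on wkstn-07 does not alert on its own; the correlation fires only when a second stage confirms the chain is progressing, which keeps the rule useful even when the initial lure is AI-polished.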


Conclusion

AI isn't just employed in phishing and impersonation schemes. It is currently used to develop malware, locate targets and weaknesses, disseminate false information, and execute attacks with a high level of intelligence. There are increasing reports of con artists using AI to carry out sophisticated attacks: they create voice clones, pose as real individuals, and conduct highly targeted phishing attempts. In China, a hacker used AI to generate a deepfake video, impersonating the victim's acquaintance and convincing them to send money. Con artists have also abused client identification procedures on crypto exchanges such as Binance using deepfakes. These examples highlight the alarming threat posed by artificial intelligence-driven attacks.

Because human behavior is predictable, exploiting human vices, habits, and choices is relatively simple. Even with sophisticated malware, hackers still need a way into an individual's mindset, which is where phishing comes into play.

Hence, the pressing question is, “How can businesses safeguard themselves against the increasing threat brought by AI?”

The answer lies in implementing a comprehensive security strategy that transcends conventional cybersecurity measures and acknowledges the human factor.

Let's pause the discussion of identifying and stopping AI-driven attacks here. In the next blog post, we will pick up where we leave off: we have identified the challenge; next, we'll provide the cyber-defense solution.