THREAT ALERT
Tuesday, July 1st, 2025
Generative AI is rapidly reshaping the cybercrime landscape, enabling threat actors to launch faster, more convincing, and highly scalable attacks. As defenders explore AI for productivity and automation, cybercriminals are exploiting the same tools to streamline fraud, malware development, phishing, and vulnerability exploitation.
Cybercriminals are refining illicit large language models (LLMs) like WormGPT and FraudGPT using breach dumps, scam scripts, and stolen credentials. Flashpoint reports that these models are fine-tuned through user feedback loops in private forums, allowing real-time improvement in generating phishing content, financial fraud schemes, and more.
These models are often sold like SaaS tools, complete with API access, tiered pricing, and private key licensing, making them accessible even to low-skill actors.
Scaled Attacks and Sophisticated Services
Generative AI is being used to craft highly convincing phishing emails, social engineering scripts, and fake job postings. AI-powered tools also assist in malware development, for example by generating HTML smuggling loaders and customizing payloads. Groups like FunkSec are using AI to enable even inexperienced actors to develop advanced tools.
Services like “Prompt Engineering as a Service” (PEaaS) and “Deepfake as a Service” (DaaS) are gaining traction.
Meanwhile, prompt engineers specialize in jailbreaking LLMs like ChatGPT or Gemini to bypass guardrails and generate banned content.
Migration to Alternative Platforms
As mainstream providers tighten restrictions on their models, attackers are shifting to newer AI platforms like DeepSeek and Qwen, which have weaker safeguards. Researchers warn these tools are being openly jailbroken and used for malware development, fraud, and bypassing anti-fraud systems.
Some cybercriminals are also developing their own unrestricted LLMs, removing reliance on external platforms altogether.
Bypassing Security Measures
AI is now used to defeat CAPTCHAs and voice biometrics, enabling attackers to bypass authentication systems. Generative AI also supports penetration testing tasks such as privilege escalation and vulnerability scanning, accelerating the early stages of the cyber kill chain.
Bottom Line:
The criminal misuse of generative AI is rapidly increasing the speed, scale, and success rate of cyberattacks. As cybercriminals exploit LLMs, jailbreak tools, and synthetic media to automate and personalize malicious activity, defenders must evolve just as quickly—leveraging the same technologies to anticipate, detect, and neutralize threats in real time. The battle for cybersecurity dominance in the age of AI is already underway.
(Source: https://www.csoonline.com/)
DefenseStorm Recommendations
As always, DefenseStorm recommends the following: