THREAT ALERT

Next-Gen Threats: How Cybercriminals Are Exploiting AI

Tuesday, July 1st, 2025


Generative AI is rapidly reshaping the cybercrime landscape, enabling threat actors to launch faster, more convincing, and highly scalable attacks. As defenders explore AI for productivity and automation, cybercriminals are exploiting the same tools to streamline fraud, malware development, phishing, and vulnerability exploitation.

Cybercriminals are refining illicit large language models (LLMs) like WormGPT and FraudGPT using breach dumps, scam scripts, and stolen credentials. Flashpoint reports that these models are fine-tuned through user feedback loops in private forums, allowing real-time improvement in generating phishing content, financial fraud schemes, and more.

These models are often sold like SaaS tools—complete with API access, tiered pricing, and private key licensing, making them accessible even to low-skill actors.

Scaled Attacks and Sophisticated Services 

Generative AI is being used to craft highly convincing phishing emails, social engineering scripts, and fake job postings. AI-powered tools also assist in malware development, such as AI-generated HTML smuggling loaders and payload customizations. Groups like FunkSec are using AI to enable even inexperienced actors to develop advanced tools.

Services like “Prompt Engineering as a Service” (PEaaS) and “Deepfake as a Service” (DaaS) are gaining traction. DaaS offerings now include:

  • Lip-synced avatars
  • Audio spoofing
  • Fake documents and backstories for scams

Meanwhile, prompt engineers specialize in jailbreaking LLMs like ChatGPT or Gemini to bypass guardrails and generate banned content.

Migration to Alternative Platforms 

With mainstream models tightening restrictions, attackers are shifting to newer AI platforms like DeepSeek and Qwen—which have weaker safeguards. Research warns these tools are being openly jailbroken and used for malware, fraud, and bypassing anti-fraud systems.

Some cybercriminals are also developing their own unrestricted LLMs, removing reliance on external platforms altogether.

Bypassing Security Measures 

AI is now used to defeat CAPTCHAs and voice biometrics, enabling attackers to bypass authentication systems. Generative AI also supports penetration testing tasks like privilege escalation and vulnerability scanning, accelerating early stages of the cyber kill chain.

Bottom Line:

The criminal misuse of generative AI is rapidly increasing the speed, scale, and success rate of cyberattacks. As cybercriminals exploit LLMs, jailbreak tools, and synthetic media to automate and personalize malicious activity, defenders must evolve just as quickly—leveraging the same technologies to anticipate, detect, and neutralize threats in real time. The battle for cybersecurity dominance in the age of AI is already underway.

(Source: https://www.csoonline.com/)

DefenseStorm Recommendations 

As always, DefenseStorm recommends the following:

  • Continue internal phishing-awareness training and simulated phishing campaigns
  • Block threat indicators at their respective controls
  • Keep all systems and software updated to the latest patched versions to protect against known security vulnerabilities
  • Maintain a strong password policy
  • Enable multi-factor authentication
  • Regularly back up data; keep backup copies offline, air-gapped, and password-protected
  • Implement a recovery plan to maintain and retain multiple copies of sensitive or proprietary data and servers in a physically separate, secure location
  • Use application hardening
  • Restrict administrative access
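To illustrate the "block threat indicators at their respective controls" recommendation, below is a minimal sketch of matching log entries against an indicator blocklist. The blocklist values, the `matched_indicators` function, and the log format are all illustrative assumptions (the IP and domain use reserved documentation ranges, not real IOCs); a production control would consume a curated threat intelligence feed.

```python
# Minimal sketch: check log lines against a blocklist of threat indicators.
# All indicator values here are placeholders, not real IOCs.
import re

# Hypothetical blocklist (assumption: populated from a threat intel feed).
BLOCKLIST = {
    "203.0.113.50",           # TEST-NET-3 placeholder IP
    "malicious.example.com",  # reserved example domain
}

# Extract IPv4-address-like and domain-like tokens from a log line.
IOC_PATTERN = re.compile(r"\b(?:\d{1,3}(?:\.\d{1,3}){3}|[\w.-]+\.[a-z]{2,})\b")

def matched_indicators(log_line: str) -> set:
    """Return any blocklisted indicators found in a log line."""
    return {tok for tok in IOC_PATTERN.findall(log_line.lower())
            if tok in BLOCKLIST}

# Example: a connection log entry referencing a blocklisted IP.
line = "2025-07-01T12:00:00Z CONNECT src=10.0.0.5 dst=203.0.113.50"
hits = matched_indicators(line)  # {"203.0.113.50"} -> block and alert
```

In practice this logic lives in the control itself (firewall rules, DNS filtering, mail gateway policies) rather than in ad hoc log scanning; the sketch only shows the matching step.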


Diana Rodriguez

Cyber Threat Intelligence Engineer

Diana Rodriguez is a Cyber Threat Intelligence Engineer at DefenseStorm. She joined DefenseStorm in 2019 with nine and a half years of experience in cybersecurity and banking. Her career began at Wells Fargo, where she played a pivotal role in protecting financial institutions. Over her five years there, she held diverse positions, starting as a teller, transitioning to financial crime analyst, and eventually becoming a cybersecurity analyst. This experience gave her a comprehensive understanding of the intricacies of the banking industry and the critical importance of cybersecurity in protecting sensitive data.

Diana holds a Bachelor’s degree in computer science from UNCC and a Master’s degree in Cybersecurity from UNC at Chapel Hill. She completed the MITRE ATT&CK® Defender certifications, which gave her the expertise to apply knowledge of adversary behaviors to enhance security configurations, analytics, and decision-making for DefenseStorm’s clients. She also holds the GIAC Certified Incident Handler, NSE 1, and NSE 2 certifications.

During her tenure at DefenseStorm, she has become proficient in the platform, taking an active role in proactively detecting and responding to cyber threats. She has played a vital role in developing new policies and advanced analytics to detect and prevent potential attacks while educating and empowering customers to optimize DefenseStorm services to fortify their security measures.