DEFENSESTORM BLOG

When the Voice on the Phone Isn’t Human: How Banks and Credit Unions Can Detect AI-Powered Social Engineering Before It Becomes a Wire Transfer

Monday, May 4th, 2026



Last week, I hosted a webinar with Roger Grimes of KnowBe4 on AI-powered social engineering and deepfakes. His most uncomfortable takeaway was also the simplest: social engineering is already involved in most successful breaches, and AI is making the attacks that work even better.

One number stayed with me: 24%. In a recent AI versus human phishing simulation contest, an autonomous AI social engineer was 24% more successful than a trained human at extracting information from real targets. If you are a criminal, the economics are obvious. The tools are cheap, convincing, and getting better fast.

For community banks and credit unions, the attack surface is no longer just the inbox. It’s the phone call, the Teams meeting, the voicemail, and any moment when a trusted voice asks someone to act quickly.

Roger walked us through the February 2024 Hong Kong case, where an employee transferred $25 million after a Zoom meeting with a “CFO” and “colleagues” who were all pre-recorded deepfakes. The technology that pulled that off two years ago required planning and a skilled operator. Today, similar attacks can run in real time for a few dollars a month.

Training Matters. It Is Not Enough.

Security awareness training matters. Call-back procedures matter. Roger has spent his career building both, and he was clear about their limits.

When a deepfake sounds real, arrives with urgency, and appears to come from someone trusted, some employees will be deceived.

That means the question is no longer only, “Can we stop the deception?” It is also, “What will we detect after it works?”

What Happens After the Deepfake Succeeds

Every successful social engineering attack eventually has to become system-level activity. That activity leaves traces. A SOC tuned to your institution’s normal operating pattern can detect the abnormal:

Anomalous after-hours access. Logins at unusual times, from unusual devices, or outside the user’s normal pattern.

Atypical authorization patterns. Wire transfers or ACH batches that fall outside normal workflows, amounts, counterparties, or dual-control procedures.

Credential deviations. Valid credentials used from unfamiliar geographies, devices, or IP ranges.

Behavioral outliers in privileged accounts. Activity that does not match the account’s established baseline, even when the credentials check out.

The defense is not trying to perfectly identify every fake voice before an employee reacts. The defense is detecting the activity that follows, before the attacker reaches the objective.
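The anomaly categories above can be sketched as a toy baseline check. This is an illustrative assumption of how behavioral baselining works in general, not DefenseStorm's GRID Active implementation; the field names (`user`, `hour`, `geo`) and rules are hypothetical.

```python
from collections import defaultdict

# Toy behavioral baseline: record each user's observed login hours and
# geographies, then flag events that fall outside that pattern.
# Illustrative only -- field names and rules are hypothetical.

def build_baseline(events):
    """Accumulate each user's historically observed login hours and geos."""
    baseline = defaultdict(lambda: {"hours": set(), "geos": set()})
    for e in events:
        baseline[e["user"]]["hours"].add(e["hour"])
        baseline[e["user"]]["geos"].add(e["geo"])
    return baseline

def score_event(baseline, event):
    """Return a list of flags for anything outside the user's pattern."""
    profile = baseline.get(event["user"])
    if profile is None:
        return ["unknown user"]
    flags = []
    if event["hour"] not in profile["hours"]:
        flags.append("after-hours login")
    if event["geo"] not in profile["geos"]:
        flags.append("unfamiliar geography")
    return flags

history = [
    {"user": "jdoe", "hour": 9, "geo": "US"},
    {"user": "jdoe", "hour": 14, "geo": "US"},
]
baseline = build_baseline(history)
print(score_event(baseline, {"user": "jdoe", "hour": 2, "geo": "RO"}))
# → ['after-hours login', 'unfamiliar geography']
```

The point of the sketch is that the attacker's credentials are valid; only the deviation from the account's established pattern gives the activity away.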

Why Banking-Specific Behavioral Baselines Matter

Generic MDR providers monitor for universal threat indicators: lateral movement, privilege escalation, known malware signatures. Those signals matter, but they do not always capture what is abnormal for a specific financial institution.

A community bank’s or credit union’s normal wire activity, core banking access, vendor connections, and privileged-user behavior look different from a Fortune 500 enterprise’s. They also look different from those of the institution across town.

That context is where banking-specific detection matters.

DefenseStorm’s GRID Active intelligent data engine processes 5M+ events per institution per day and builds behavioral baselines specific to each financial institution’s environment. When post-social-engineering activity deviates from those baselines, the detection fires because something in your institution’s normal operating pattern changed. Generic threat signatures don’t make that call.

The Collaborative SOC adds human banking expertise. Analysts who understand financial institution operations can tell the difference between a legitimate after-hours wire transfer and an anomalous one triggered by a deepfake impersonation. That context, applied within 90 seconds on critical cases, is what turns detection into response.

A Practical Approach to Post-Social-Engineering Detection

For financial institutions strengthening their posture against AI-powered social engineering, we recommend the following steps:

  1. Baseline high-risk workflows. Document normal activity for wires, ACH, privileged access, vendor approvals, and core banking administration.
  2. Monitor post-compromise behavior. Look for downstream activity after social engineering succeeds, not just initial intrusion indicators.
  3. Tabletop deepfake scenarios. Assume the employee was deceived, then test what your monitoring catches and how quickly your team responds.
  4. Give the SOC banking context. Analysts need to understand what normal looks like for your institution, not just what suspicious looks like in general.
  5. Prepare examiner-ready evidence. Document baselines, detection rules, response procedures, and tabletop results.
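Step 1 can be illustrated with a minimal sketch: build a per-originator baseline of historical wire amounts and flag outliers by z-score. The threshold, field names, and helper functions here are hypothetical assumptions for illustration, not a DefenseStorm detection rule or examiner guidance.

```python
import statistics

# Toy baseline for a high-risk workflow: per-originator wire amounts.
# Illustrative assumptions only -- real dual-control and counterparty
# checks would layer on top of a simple amount outlier test like this.

def wire_baseline(history):
    """Compute (mean, stdev) of wire amounts per originator."""
    by_user = {}
    for user, amount in history:
        by_user.setdefault(user, []).append(amount)
    return {
        user: (statistics.mean(amts), statistics.stdev(amts))
        for user, amts in by_user.items()
        if len(amts) > 1  # need at least two points for a sample stdev
    }

def is_atypical(baseline, user, amount, z_threshold=3.0):
    """Flag wires with no established pattern or a large z-score."""
    if user not in baseline:
        return True  # no baseline yet: route for manual review
    mean, stdev = baseline[user]
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

history = [("ops1", 10_000), ("ops1", 12_000), ("ops1", 9_500), ("ops1", 11_000)]
bl = wire_baseline(history)
print(is_atypical(bl, "ops1", 250_000))  # → True
print(is_atypical(bl, "ops1", 10_500))   # → False
```

Documenting even a baseline this simple gives the tabletop exercise in step 3 something concrete to test against, and gives examiners evidence that "normal" is defined rather than assumed.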

Roger ended the webinar with a point worth remembering. For the first time in his 40-year career, defenders are seeing the technology early enough to build with it and against it.

AI-powered social engineering will keep getting better. Banks and credit unions will not prevent every deception. But they can still detect what happens next.

The institutions that invest now in banking-specific behavioral baselines, MDR response, and SOC context will be better positioned to see the attacks others miss.

Concerned about your institution’s detection posture against AI-powered social engineering? Schedule a conversation to see how DefenseStorm’s Collaborative SOC and banking-specific behavioral baselines detect the downstream access anomalies that follow a successful attack.

Frequently Asked Questions

What is AI-powered social engineering?

AI-powered social engineering uses generative AI to impersonate trusted people through deepfake voice, video, or text. Attackers use these tools to create urgency, build credibility, and convince employees to share information, approve access, or authorize transactions.

Can security awareness training stop deepfake attacks?

Training reduces risk, but it cannot eliminate it. When a deepfake sounds real and the request appears to come from someone trusted, some employees will be deceived. Detection and response need to assume that some attacks will get through.

How does DefenseStorm detect AI-powered social engineering?

DefenseStorm detects the downstream access anomalies that follow a successful social engineering attack. The GRID Active intelligent data engine builds behavioral baselines specific to each institution, and the Collaborative SOC applies banking expertise to distinguish suspicious activity from legitimate operations within minutes.


Gina Hortatsos

Chief Growth Officer

Gina is a seasoned marketing and operations leader with 27 years of experience in enterprise marketing, strategy, and community building. She brings deep expertise in building, optimizing, and scaling high-performing revenue marketing teams at enterprise tech companies. Gina is also an independent board member at Casted, a post-Series A martech startup backed by High Alpha and Revolution.

Most recently, Gina led the Marketing & Community team at HackerOne, where she oversaw pipeline expansion into the enterprise market, improved brand reach and awareness, and optimized customer, partner, and hacker community efforts through AI-led efficiency and effectiveness gains. Prior to HackerOne, Gina was CMO at LogicGate, helping grow company revenue 6X in three years and supporting a Series C fundraise. Before that, Gina held marketing leadership positions at FourKites, Hyland, SAP, Oracle, and Hyperion.

Gina holds an MBA in Marketing from Pepperdine University in Malibu, CA and a BA in Psychology from the University of Illinois at Urbana-Champaign.