Deepfake technology — synthetic media that uses AI to convincingly alter video, audio, or images — now fuels some of the sharpest social engineering threats organisations face. Criminals clone executive voices and faces to issue urgent requests via Zoom, phone, or email, leading to fraud and data leaks at scale. This article examines the evidence, the regulatory landscape, and the evidence-based defences that actually work — centred on people, not just technology.
The Scale of the Threat — By the Numbers
The deepfake threat is no longer emerging — it has arrived. The data from 2025 is unambiguous about both the scale and the acceleration.
In Australia, the ACSC's Annual Cyber Threat Report 2024–2025 confirms AI is supercharging attacks: phishing laced with deepfakes featured in 60% of the cyber incidents the agency responded to. Large Australian businesses saw cyber costs rise 219% to over $200,000, AI-driven impersonation topped the list of new threats, and 51% of organisations encountered such attacks in the past year.
These threats extend beyond financial loss, eroding trust in digital communications and damaging reputations. Deepfakes can manipulate stock prices through fake executive announcements or sabotage supply chains by impersonating vendors. As AI tools become more accessible, even smaller organisations are now at risk.
Real-World Case Studies — What Has Already Happened
These are not hypothetical scenarios; documented incidents from 2025 illustrate the tangible impact of deepfake impersonation on organisations that believed their controls were sufficient.
In Australia, while specific corporate cases are less publicised, the ACSC reports rising AI-enhanced phishing in the finance and government sectors, where deepfakes have increasingly facilitated credential theft and payment fraud.
The Regulatory Landscape — Australian and Global
Regulatory responses to deepfake misuse are accelerating, but enforcement still lags the threat. Organisations must align with these evolving frameworks now rather than waiting for mandatory requirements to crystallise.
Regulatory alignment is a floor, not a ceiling. Organisations that treat compliance as their target are already behind the threat — the evidence demands a proactive, people-first approach that goes well beyond minimum obligations.
Building a Robust Cybersecurity Culture — The Four Pillars
The foundation of deepfake defence lies in people. Technology helps, but the research is clear: organisations with proactive, people-first programs cut successful breaches by up to 65% (Experian, 2025). Effective defence requires four interlocking pillars.
What Good Deepfake Awareness Training Looks Like
Generic cybersecurity training does not prepare employees for deepfake threats. Effective awareness programs in this area share specific characteristics that distinguish them from standard compliance modules.
- Deepfake-specific indicators — training that teaches employees to spot unnatural facial movements, audio glitches, inconsistent lighting, lip-sync drift, and background noise anomalies
- Scenario-based simulations — employees practise spotting fakes in emails, calls, and videos through live drills that mirror real attack patterns
- Executive-focused content — senior leaders are disproportionately targeted; their training must reflect the specific vectors and urgency cues used against them
- Verification protocols — clear, memorable procedures for verifying unexpected requests through a second channel, regardless of how convincing the first appears
- Ongoing reinforcement — threat patterns evolve rapidly; annual training is insufficient — quarterly simulation and refreshers are the minimum effective cadence
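The verification-protocol point above can be made concrete. Below is a minimal sketch, in Python, of the second-channel ("call-back") rule: a high-risk request is never approved on the strength of the channel it arrived on, only after confirmation via a pre-registered independent channel. Every name, field, and the directory structure here are illustrative assumptions, not a reference to any real system.

```python
from dataclasses import dataclass

# Hypothetical record of a request received over one channel (e.g. a video call).
@dataclass
class Request:
    requester: str   # claimed identity, e.g. "CFO"
    channel: str     # channel the request arrived on, e.g. "zoom"
    action: str      # what is being asked, e.g. "wire_transfer"

# Directory of pre-registered call-back channels, maintained out of band.
# In practice this lives in an identity or HR system, never in the request itself.
CALLBACK_DIRECTORY = {
    "CFO": "pre-registered phone line",  # placeholder entry, illustrative only
}

def requires_second_channel(req: Request) -> bool:
    """High-risk actions always require verification on a second channel."""
    return req.action in {"wire_transfer", "credential_reset", "vendor_change"}

def verify(req: Request, confirmed_on_callback: bool) -> bool:
    """Approve only if the request is low-risk, or the requester was reached
    on their pre-registered channel and confirmed the request there."""
    if not requires_second_channel(req):
        return True
    if req.requester not in CALLBACK_DIRECTORY:
        return False  # no trusted call-back channel on file: reject
    return confirmed_on_callback

# A convincing deepfake on the first channel changes nothing: approval still
# hinges entirely on the independent call-back.
req = Request(requester="CFO", channel="zoom", action="wire_transfer")
print(verify(req, confirmed_on_callback=False))  # rejected without call-back
print(verify(req, confirmed_on_callback=True))   # approved after call-back
```

The design point is that the decision never consults how convincing the first channel was; that property is what makes the procedure deepfake-resistant.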
KPMG's guidance frames effective deepfake defence as turning employees into human firewalls — not through fear, but through genuine competence built over time. That competence only develops through repeated, realistic practice.
Looking Ahead — Deepfake Threats in 2026–2028
The forecasts from leading research firms are unambiguous: deepfake-related threats will accelerate significantly through 2028. Organisations that build their defences now will be substantially better positioned than those that wait.
Experian forecasts that organisations running proactive, people-first simulations will cut successful breaches by up to 65%. Deloitte's 2025 Tech Trends report is direct: "The wall isn't tech — it's behaviour."
The threat is accelerating. But so is the shield — if you build it now.
The Deepfake Threat Timeline
Understanding the trajectory helps organisations prioritise their investment and preparation. The window for proactive defence is open now — but it will not remain so indefinitely.
How PeopleShield Approaches This Challenge
We don't just talk strategy — we deliver it. We build full programs from scratch: deepfake simulations using your own executives' communication patterns, custom training curricula, culture audits, and ongoing reinforcement nudges calibrated to your organisation's specific threat exposure.
Already have pieces in place? We'll audit, refine, and scale what's working. Think quiet partnership, not loud overhaul. Security that feels natural to your workforce, not forced on them.
The most important thing about deepfake defence is this: it cannot be solved by technology alone. The attack surface is human — and the defence must be too. Organisations that understand this, and act on it now, will be in a categorically different position in 2027 than those still treating deepfake awareness as a future problem.
What this means for your organisation
Build the human firewall before the attack arrives.
APEX delivers deepfake-aware training, realistic simulations, and the behavioural reinforcement that turns awareness into instinct. If the evidence in this article concerns you, the APEX program page is the right next step.


