EMERGING THREAT

AI-Powered Payment Fraud

Deepfake voices. Cloned executives on video calls. AI-written emails indistinguishable from the real thing. The old ways of verifying payments no longer work.

+257% deepfake incidents
$25M lost in one deepfake attack
40% of BEC emails are AI-generated

Everything you trusted is now fakeable

For years, the advice for preventing payment fraud was simple: "Call them to check." If a supplier requests new bank details, call them on a known number to confirm. If your CEO emails an urgent transfer, call them directly.

That advice assumed the voice on the phone was real. It assumed the person on the video call was who they appeared to be. It assumed emails from a legitimate address were written by a legitimate person.

None of these assumptions are safe anymore. AI has made every traditional verification channel unreliable.

How AI is being used for payment fraud

Criminals are using the same AI tools as everyone else, but for impersonation, manipulation, and theft.

Deepfake video calls

In February 2024, an employee at Arup (Hong Kong) was tricked into transferring $25 million after a video call with what appeared to be the company's CFO, deepfaked in real time. The employee saw a familiar face, heard a familiar voice, and followed instructions. Every participant on the call except the employee was AI-generated.

This is the largest known single deepfake fraud loss to date.

Voice cloning

AI voice cloning needs as little as 3 seconds of audio to create a convincing copy of someone's voice. Scammers use clips from conference recordings, YouTube, podcasts, or voicemail greetings to clone a CEO or finance director's voice, then call the AP team directly to authorise a payment.

AI-generated BEC emails

By mid-2024, an estimated 40% of BEC phishing emails were AI-generated. These emails are grammatically perfect, match the tone and style of the person being impersonated, and are specifically crafted to bypass spam filters. The days of catching scams by looking for spelling mistakes are over.

Automated reconnaissance

AI tools scrape LinkedIn, company websites, news articles, and financial reports to build detailed profiles of targets. What used to take a scammer weeks of manual research now takes minutes, enabling them to target 400+ companies per day.

The scale of AI-powered fraud

20% of Australian businesses received deepfake threats in the past 12 months (Mastercard, 2024)

36% of Australian consumers were targeted by deepfake scams, Oct 2023 to Oct 2024 (Mastercard, 2024)

+257% increase in global deepfake incidents year-on-year (Deepstrike, 2024)

179 deepfake incidents in Q1 2025 alone, exceeding all of 2024 (150) (Deepstrike, 2025)

400+ companies targeted per day using AI-powered CEO fraud (Deepstrike, 2024)

$40B projected AI-facilitated fraud losses in the US by 2027 (Deloitte, 2024)

Why "call them to check" no longer works

Every traditional verification method can now be defeated by AI.

Phone callbacks

A callback only protects you if the number is genuinely the supplier's. If the number came from the fraudulent request itself, voice cloning means the person answering could sound exactly like your supplier, and you'd have no way to tell the difference.

Video confirmation

Real-time deepfakes can generate convincing video of anyone. The Arup attack proved that even video calls can be completely fabricated.

Email verification

AI-generated emails are grammatically perfect and match the sender's style. If the email account is also compromised, there's nothing to flag.

Manual document checks

AI can generate convincing invoices, letterheads, and supporting documents. Manual visual inspection is unreliable against AI-generated forgeries.

Verification that AI can't fake

ezyshield doesn't rely on emails, phone calls, or visual checks. It verifies identity, business registration, and bank account ownership through channels that can't be impersonated.

Biometric, not verbal

Identity is verified through biometric checks, not phone calls or video calls that can be deepfaked. The verification happens through secure channels, not communication channels.

Registry-verified, not email-verified

Business details are checked against ABR and ASIC in real time, not by asking someone to confirm via email. AI can write emails. It can't alter government registries.
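As an illustration of why registry data beats email confirmation, even the structure of an ABN is checkable offline before any lookup. The sketch below (illustrative only, not ezyshield's implementation) validates the check-digit scheme the Australian Business Register publishes: subtract 1 from the first digit, weight the 11 digits by 10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, and require the sum to be divisible by 89. A checksum pass only means the number is well-formed; confirming the entity behind it still requires the live ABR/ASIC lookup.

```python
# Illustrative only: validates the ABN check-digit scheme published by the
# Australian Business Register. Passing this check does NOT confirm the
# business exists or matches the payee; that needs a live registry lookup.

ABN_WEIGHTS = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

def abn_checksum_ok(abn: str) -> bool:
    digits = [int(c) for c in abn if c.isdigit()]
    if len(digits) != 11:
        return False
    digits[0] -= 1  # the scheme subtracts 1 from the leading digit
    return sum(d * w for d, w in zip(digits, ABN_WEIGHTS)) % 89 == 0

# The ATO's own ABN is a well-known valid example.
print(abn_checksum_ok("51 824 753 556"))  # True
print(abn_checksum_ok("51 824 753 557"))  # False: checksum fails
```

A fake invoice can carry a random 11-digit number that fails this test instantly, which is why structural checks plus a registry query catch forgeries that visual inspection misses.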

Fingerprinted, not trusted

Verified details are cryptographically fingerprinted. Any change, regardless of how convincing the request looks, requires full re-verification. Trust is replaced with proof.
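The fingerprinting idea can be sketched in a few lines. This is a hedged illustration, not ezyshield's actual scheme: serialise the verified payee record canonically, hash it, and treat any hash mismatch at payment time as a trigger for full re-verification.

```python
# Minimal sketch of detail fingerprinting (illustrative; not ezyshield's
# actual scheme). Verified payee details are serialised canonically and
# hashed; any later change to the details changes the hash, so a mismatch
# forces re-verification no matter how convincing the change request looks.
import hashlib
import json

def fingerprint(details: dict) -> str:
    # Canonical JSON (sorted keys, fixed separators) so identical details
    # always produce the same digest.
    canonical = json.dumps(details, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

verified = {"abn": "51824753556", "bsb": "062-000", "account": "12345678"}
stored = fingerprint(verified)

# A fraudster's "updated bank details" request changes the record...
tampered = {**verified, "account": "87654321"}

# ...and the mismatch is caught before any payment is released.
print(stored == fingerprint(verified))  # True
print(stored == fingerprint(tampered))  # False
```

The point of the hash is that it makes "trust the request" impossible: the system compares proof against proof, and any delta, however it arrived, routes back through the full verification flow.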

AI is getting smarter. Your verification needs to be smarter too.

ezyshield verifies through channels that AI can't impersonate: biometric identity, government registries, and bank account ownership. Not phone calls and emails.