How generative AI is reshaping online fraud — and what defensive AI can, and cannot, do about it. In this arms race, the goal is not perfect security but manageable risk.
In early 2024, an employee at global design consultancy Arup Group joined what looked like a routine video meeting with his finance leadership. On screen, the company’s CFO and colleagues appeared to instruct him to process several transfers. Only later did investigators discover that the “CFO” and the other participants were AI-generated deepfakes; by then, approximately $25 million had been wired to criminal accounts. The case has become a warning sign of a new era of cybercrime.

Generative AI has turned phishing from blunt-force spam into multi-channel deception that convincingly imitates how colleagues write, speak, and even appear on video. Threat intelligence from cybersecurity firms reports significant growth in phishing activity, with campaigns that are more sophisticated and more personalised than in previous years. AI tools allow attackers to generate variants that reference specific projects, corporate language, and organisational structure, making messages far harder to distinguish from legitimate communication. Phishing-as-a-service ecosystems now let non-technical actors launch targeted campaigns, packaging ready-made infrastructure, templates, and techniques that can capture credentials and session tokens even when multifactor authentication is in use. The core shift is that deception now adapts to the victim and mirrors their communication patterns, so traditional signature-based filters fail when every phishing instance looks slightly different.
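To make the filtering problem concrete, the short Python sketch below contrasts an exact-match signature with a simple word-overlap similarity check. The example messages and the 0.6 threshold are illustrative assumptions rather than real filter rules, but they show why a single reworded lure slips past a signature while still resembling the known template.

```python
# Minimal sketch (not a production filter): why exact signatures miss
# AI-generated variants while a similarity measure still catches them.
# The example messages and the 0.6 threshold are illustrative assumptions.
import hashlib

KNOWN_PHISH = "Urgent: the CFO needs you to process this wire transfer today."

def signature(text: str) -> str:
    """Exact signature: a hash of the normalised message body."""
    return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

def jaccard(a: str, b: str) -> float:
    """Token-set similarity between two messages (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# An LLM-rewritten variant of the same lure: one changed word is enough
# to defeat the exact signature, but the wording overlap remains high.
variant = "Urgent: the CFO needs you to process this wire transfer tonight."

print(signature(variant) == signature(KNOWN_PHISH))  # False - the signature filter misses it
print(jaccard(variant, KNOWN_PHISH) > 0.6)           # True  - the similarity check still flags it
```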
AI-generated deception now extends well beyond email. Deepfake video and audio technology lets criminals create synthetic representations of executives or colleagues who appear to issue urgent instructions: attackers study recordings of leaders and synthesise matching voices and facial expressions to pressure victims into transferring funds or disclosing sensitive information. Investment and financial scams illustrate the scale of the problem, with fraudulent platforms combining persuasive generative chat, fake endorsements, and realistic interfaces to extract substantial sums from victims before the deception is discovered. The underlying principle remains social engineering, persuading humans to act against their interests; what has changed is the fidelity of the illusion. Visual and auditory cues that once served as authentication signals can now be fabricated.

Defenders are responding by deploying their own AI. Modern detection systems analyse message content, metadata, and behavioural signals to identify anomalies, and machine learning models can surface patterns across large datasets that human analysts might miss. AI is not a complete solution, however: models trained on historical data may struggle against adversaries who deliberately craft inputs to evade detection, so continuous retraining and human oversight remain essential.
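As a rough illustration of behavioural detection, the sketch below trains an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) on a few synthetic metadata features of “normal” internal mail and then scores a suspicious message. The features and numbers are invented for illustration; a production system would use far richer signals and continuous retraining.

```python
# Minimal sketch of behavioural anomaly detection on message metadata,
# assuming scikit-learn is available. The three features (send hour,
# number of links, days since the sender domain was registered) and the
# synthetic values below are illustrative assumptions, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: historical internal mail - business hours, few links, long-established domains.
normal = np.column_stack([
    rng.integers(8, 18, 500),       # send hour
    rng.integers(0, 3, 500),        # links per message
    rng.integers(400, 4000, 500),   # sender-domain age in days
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious message: sent at 03:00, several links, week-old lookalike domain.
suspect = np.array([[3, 6, 7]])
print(model.predict(suspect))  # [-1] means the message is flagged as anomalous
```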
“Neither attackers nor defenders hold a permanent advantage. Security outcomes depend entirely on choices — how systems are designed, how employees are trained, and how technology is governed. Human responsibility remains central.”
Three scenes now occur routinely in organisations. In the first, a finance manager receives instructions, apparently from leadership, to process an urgent payment. The message references internal details and is followed by a verification call with a convincing synthesised voice. The request is later revealed as fraudulent; the human error was not ignorance of security rules but reliance on signals that AI can now imitate convincingly. In the second, an employee clicks a security alert and reaches a login page that looks entirely genuine; behind the scenes, an adversary proxies authentication traffic to capture credentials and session tokens. Multifactor authentication reduces but does not eliminate the risk when attackers target the authentication process itself. In the third, investors interact with “advisors” whose apparent knowledge and responsiveness come from generative models directed to mimic legitimate services; by the time the fraud is discovered, funds have vanished. In each case, the deception targets human judgment rather than technical vulnerabilities, and technology alone cannot eliminate that risk.
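The second scene hinges on how sessions work after login. The minimal sketch below, using an invented in-memory session store rather than any real framework, shows why a captured session token is enough: once multifactor authentication succeeds, many applications issue a bearer token, and the server accepts any request that presents it.

```python
# Minimal sketch of why a stolen session token can bypass multifactor login:
# after password and MFA both pass, the application issues a bearer token,
# and any request presenting that token is treated as the user. The session
# store and values here are illustrative assumptions, not a real framework.
import secrets
from typing import Optional

SESSIONS = {}  # token -> user, created only after password + MFA both succeed

def login(user: str, password_ok: bool, mfa_ok: bool) -> Optional[str]:
    """Issue a session token only when both factors succeed."""
    if password_ok and mfa_ok:
        token = secrets.token_urlsafe(32)
        SESSIONS[token] = user
        return token
    return None

def handle_request(token: str) -> str:
    """The server checks only the token, not where or from whom it arrived."""
    user = SESSIONS.get(token)
    return f"200 OK as {user}" if user else "401 Unauthorized"

victim_token = login("finance.manager", password_ok=True, mfa_ok=True)

# An adversary-in-the-middle proxy that relayed the real login now replays
# the captured token from its own infrastructure and is accepted as the user.
print(handle_request(victim_token))  # 200 OK as finance.manager
```

Binding tokens to the device or channel and shortening their lifetime narrows this window, but it does not remove the need for out-of-band verification of high-value requests.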
The solution is not to reject AI but to integrate it responsibly within a layered defensive architecture. Behavioural and contextual detection can identify anomalous communication patterns; phishing-resistant authentication binds logins to specific devices and websites so that stolen credentials and session tokens are insufficient for access; process controls require out-of-band verification of high-value transactions through channels entirely outside the original message thread; and security awareness programmes must reflect modern threats and realistic scenarios rather than yesterday’s spam. These measures acknowledge that security is systemic, not purely technological. AI-generated phishing is a documented and growing threat: generative models have lowered the barriers to social engineering, enabling criminals to operate at scale and adapt dynamically to defences. The contest is ongoing, and neither side holds a permanent advantage. Security outcomes depend entirely on choices: how systems are designed, how employees are trained, and how technology is governed. Organisations that combine intelligent tools with sound governance and security culture are better positioned to navigate this evolving threat landscape. In this arms race, the goal is not perfect security but manageable risk.
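To make the “binding to devices and websites” point concrete, the simplified sketch below shows the relying-party origin check at the heart of phishing-resistant WebAuthn/FIDO2 authentication. The expected origin, helper function, and token values are illustrative assumptions, and full signature verification over the authenticator data is omitted; the point is that the browser, not the user, reports the origin inside the signed client data, so an assertion relayed through a lookalike site is rejected.

```python
# Minimal sketch of the origin check behind phishing-resistant (WebAuthn/FIDO2)
# authentication. Signature checking over the authenticator data is omitted;
# the origin, challenge, and helper names below are illustrative assumptions.
import base64
import json

EXPECTED_ORIGIN = "https://login.example-corp.com"   # hypothetical relying party

def check_client_data(client_data_json_b64: str, issued_challenge: str) -> bool:
    """Verify the type, challenge, and origin embedded in clientDataJSON."""
    padded = client_data_json_b64 + "=" * (-len(client_data_json_b64) % 4)
    data = json.loads(base64.urlsafe_b64decode(padded))
    return (
        data.get("type") == "webauthn.get"
        and data.get("challenge") == issued_challenge
        and data.get("origin") == EXPECTED_ORIGIN
    )

# A proxied login attempt: the victim's browser was really on the phishing
# domain, so that domain is what ends up in the signed client data.
phished = base64.urlsafe_b64encode(json.dumps({
    "type": "webauthn.get",
    "challenge": "abc123",
    "origin": "https://login.example-c0rp.com",      # lookalike domain
}).encode()).decode()

print(check_client_data(phished, "abc123"))  # False - the relayed assertion is rejected
```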
References
- Hong Kong Police Force. (2024). Police appeal to public regarding deepfake video conference fraud case. Hong Kong: HKPF Press Release [Re: Arup Group $25M deepfake incident, February 2024].
- Europol. (2023). ChatGPT and Large Language Models: The Dark Side of AI. The Hague: Europol Innovation Lab.
- Zeng, V., & Jiang, R. (2023). Phishing-as-a-Service and the rise of adversary-in-the-middle toolkits. IEEE Symposium on Security and Privacy Workshops, 2023.
- ENISA. (2023). ENISA Threat Landscape 2023: Social Engineering. Athens: European Union Agency for Cybersecurity.
- NIST. (2023). NIST SP 800-63B Digital Identity Guidelines: Authentication and Lifecycle Management. Gaithersburg, MD: National Institute of Standards and Technology.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. Cambridge, MA: MIT Press. [Foundational reference on adversarial machine learning underpinning both offensive and defensive AI systems.]
About the Author
Dr. Sarath Kappagantula Venkata Nageshwara holds a Ph.D. in Machine Learning and Cybersecurity and is a researcher and distributed systems practitioner specialising in scalable AI, cloud-native engineering, and production-grade machine learning solutions. He currently serves as a Senior Software Engineer and Distributed Systems Lead, and his work focuses on intelligent automation and model optimisation, bridging academic research with practical implementations in cybersecurity and enterprise AI systems.