Scammers have weaponised artificial intelligence at unprecedented speed and scale, unleashing a wave of global fraud that has reshaped the cybercrime landscape, according to a series of alarming new reports.
Criminals are now deploying AI-generated deepfakes, voice clones, phishing emails, and fake websites at what experts describe as an “industrialised” pace — targeting individuals and businesses worldwide with striking precision.
Microsoft’s latest Cyber Signals report revealed that the company blocked $6.28 billion in fraud attempts between April 2024 and April 2025, along with roughly 1.6 million bot-driven fake account sign-ups every hour.
The company’s Anti-Fraud Team has seen a surge in activity from China and Germany, with fake AI websites and tech support scams flooding global networks.
Google’s Mandiant team uncovered a vast campaign of malicious AI-themed websites, linked to threat actors in Vietnam, designed to steal personal data and distribute malware under the guise of popular AI tools.
Meanwhile, blockchain firm Chainalysis reported a record-breaking $12.4 billion in cryptocurrency scam revenues in 2024, with AI-fuelled “pig butchering” scams — long-term deception schemes luring victims into fake crypto investments — making up 33% of that total.
“The game has changed,” said Professor Matthew Warren, Director of RMIT’s Centre for Cyber Security Research and Innovation. “The scams aren’t new — but AI has made them faster, cheaper, and terrifyingly convincing.”
AI technologies have enabled fraudsters to scale operations with minimal effort, bypass know-your-customer (KYC) protections, and launch real-time scam calls using cloned voices of victims’ family members or company executives.
Illicit marketplaces such as Huione Guarantee have processed $70 billion in cryptocurrency since 2021, and sales of AI scam software on such platforms grew 1,900% in 2024 alone.
Authorities across the globe, including the U.S. Treasury, have begun cracking down — recently sanctioning firms such as Philippines-based Funnull Technology Inc. for enabling AI-driven fraud infrastructure.
With scammers now operating like tech startups, cybersecurity experts warn: we’ve entered a new era of deception — one where any voice, face, or website could be a lie, built by AI in seconds.