The numbers tell a sobering story: online scams are not only becoming more prevalent but dramatically more sophisticated.
In 2024, Americans reported a staggering $16.6 billion in losses to the FBI’s Internet Crime Complaint Center—a 33% increase from the previous year.
But beyond the financial toll, a more troubling pattern emerges from the data: scams are evolving faster than our ability to detect them.
The Perfect Storm: AI Meets Fraud
The single biggest factor making scams harder to spot is artificial intelligence. Generative AI has fundamentally transformed the fraud landscape, giving even low-skill criminals access to Hollywood-quality deception tools. The statistics are alarming:
- Deepfake fraud has exploded. Financial institutions reported a 2,137% increase in deepfake attacks over the past three years, according to Signicat’s 2024 research.
What represented just 0.1% of fraud attempts three years ago now accounts for approximately 6.5%, or one in 15 cases. The number of deepfake files in circulation is projected to surge from 500,000 in 2023 to 8 million by 2025, a 1,500% increase (the arithmetic is sanity-checked after this list).
- Voice cloning has become democratised. A 2024 McAfee study found that one in four adults has experienced an AI voice scam. The barrier to entry has collapsed: scammers need just three seconds of audio to create a convincing voice clone with an 85% match to the original speaker.
- The technology is accessible and cheap. Creating a convincing deepfake robocall costs approximately $1 and takes less than 20 minutes. Google searches for free voice cloning software increased 120% between July 2023 and 2024.
- As Capitol Technology University noted, AI is being used to write more believable phishing emails, create fake social media profiles, and generate fraudulent job postings—all at scale.
The Human Detection Problem
Perhaps most concerning is how poorly humans perform at detecting these AI-powered scams. The data reveals a troubling reality:
- Human detection rates for high-quality video deepfakes are just 24.5%, according to academic research
- Only 27% of people can identify whether a friend or loved one on a call is genuine or AI-generated
- For altered text, detection rates drop to 57%
- Nearly half of tested individuals (48.2%) cannot tell a real photo from a deepfaked one, leaving accuracy at roughly the level of a random guess
- People rated deepfaked faces as 7.7% more trustworthy than real faces
F-Secure’s 2024 Living Secure survey of 7,000 people found that 70% don’t know who to trust online, and the report explicitly cites generative AI as making scams increasingly difficult to detect. A separate survey found that 91% of Americans believe scams have become more sophisticated in 2024.
The Scale of the Problem
The sheer volume of scam attempts has reached crisis levels:
Daily bombardment. According to F-Secure, 85% of people received a digital scam attempt in 2024, with 40% receiving suspicious emails, messages, or phone calls every day.
More than one-third (36%) reported receiving more scam attempts than just 12 months earlier. Yet 44% don’t report these attempts, leaving authorities with an incomplete picture of the threat.
Rising victimisation rates. One in three people who reported fraud to the Federal Trade Commission in 2024 said they lost money, up from one in four the previous year.
F-Secure found that 34% of people experienced a cyber scam over the past 12 months, a 7% increase since 2022. The Pew Research Center reported that 73% of U.S. adults have at some point experienced incidents such as credit card fraud, ransomware, or online shopping scams.
Increasing effectiveness. In 2015, 71.2% of people targeted by online scams lost money. By 2023, that figure had climbed to 82.6%, suggesting fraud tactics are indeed becoming harder to detect and more successful.
High-Stakes Corporate Breaches
While individual consumers bear much of the burden, corporations face catastrophic losses from sophisticated scams:
The most dramatic example occurred in February 2024, when a finance worker at global engineering firm Arup transferred $25 million after participating in a video conference call with what appeared to be the company’s CFO and several colleagues.
Every other person on the call was a deepfake. More broadly, the average loss per deepfake incident for businesses in 2024 reached $500,000, with large enterprises experiencing losses of up to $680,000.
Business email compromise (BEC) remained the second most profitable cybercrime in 2024, generating $2.77 billion in reported losses.
Nearly $8.5 billion in BEC losses were reported to the FBI between 2022 and 2024. The average loss per complaint filed with the FBI reached $19,372 in 2024, up from $14,197 in 2023.
The Age Paradox
Conventional wisdom suggests older adults are most vulnerable to scams, and the data partially supports this—but with surprising nuances. People over 60 filed 147,127 complaints with the FBI in 2024, representing a 46% increase from 2023, with total losses of $4.88 billion. California seniors alone lost $832 million.
However, younger adults are not immune. People aged 20-29 reported losing money to scams more often than those 70 and older, according to FTC data.
The difference is in the amount: when older adults lost money, they lost far more than any other age group. Complainants over 60 lost an average of $83,000, and more than 7,500 of them lost over $100,000 each.
Notably, 69% of Americans believe more young people are becoming scam victims in 2024, possibly because sophisticated tech-based scams increasingly target younger, digitally native populations through social media and online platforms.
Specific Scam Types Surging
The data reveals which scams are proving most difficult to detect:
Investment fraud led all categories with $6.57 billion in losses reported to the FBI in 2024. Cryptocurrency-related scams were particularly devastating, with nearly 150,000 complaints involving digital assets and $9.3 billion in losses, a 66% increase from the previous year. The crypto sector accounted for 88% of all deepfake cases detected in 2023.
Job scams have tripled. Between 2020 and 2024, reports of fake employment schemes nearly tripled, with losses growing from $90 million to $501 million, according to the FTC. Capitol Technology University noted that AI is helping fraudsters create convincing job postings that prey on vulnerable job seekers affected by widespread layoffs.
Tech support fraud generated $1.46 billion in losses, while extortion and sextortion complaints surged 59% to 54,936 cases with $33.5 million in losses.
Emerging tactics include QR code phishing (“quishing”), which succeeds because many people scan a code and follow the resulting link without a second thought, and toll scams (fake notifications about unpaid road tolls), which accounted for 59,271 complaints and $129,624 in losses.
The Detection Gap
While defensive technologies are improving, they’re not keeping pace with threats. State-of-the-art open-source deepfake detectors experience performance drops of up to 50% when tested against new, real-world deepfakes not found in their training data.
The AI fraud detection market is growing at 28-42% annually, but deepfake content is projected to increase by 900% annually, a critical and widening gap (illustrated in the sketch below).
Only 22% of financial institutions have implemented AI-based fraud prevention tools, according to Signicat’s report, leaving many companies vulnerable to sophisticated attacks.
Furthermore, a 2024 survey found that about one in four company leaders had little to no familiarity with deepfake technology, and 32% had no confidence their employees could recognize deepfake fraud attempts.
The Email Problem Persists
Despite decades of warnings about phishing, email remains the primary vector for scams.
F-Secure found that 54% of cyber scams are conducted via email, and sending and receiving email was the most common digital activity, with 88% of respondents doing so weekly. The FBI received 193,407 phishing/spoofing complaints in 2024, making phishing the most reported crime type.
However, scammers are diversifying. People lost over $3 billion to scams that started online (websites, social media, apps) compared to approximately $1.9 billion from more traditional contact methods. Websites were responsible for 36.7% of online scams, while social media accounted for 20.3%.
What the Trends Suggest
The data points to several clear conclusions:
- Scams are objectively harder to detect. The combination of AI-generated content, sophisticated social engineering, and multi-channel attacks has created scenarios where traditional detection advice—like “look for spelling mistakes” or “check the tone”—no longer applies. AI creates flawless, convincing content that passes the eye test.
- Volume amplifies difficulty. The sheer number of daily scam attempts creates decision fatigue. When 40% of people receive suspicious communications daily, distinguishing legitimate from fraudulent becomes exhausting.
- Trust has eroded. With 70% of people unsure who to trust online and 80% worried about their online safety, the psychological environment makes everyone more susceptible to sophisticated deception.
- The speed of evolution is accelerating. North America experienced a 1,740% increase in deepfake fraud, while deepfake fraud attempts spiked 3,000% in 2023. Synthetic identity fraud surged 31% in 2024, and fraud losses from generative AI are expected to grow from $12.3 billion in 2023 to $40 billion by 2027, roughly a 32% compound annual growth rate (checked in the sketch after this list).
Yes, online scams are definitively getting harder to detect
The statistics paint a clear picture of adversaries leveraging cutting-edge technology to scale sophisticated attacks while defenders struggle to keep pace.
The 33% year-over-year increase in losses, combined with declining human detection rates and surging AI-powered fraud, suggests the problem will worsen before it improves.
The most concerning finding may be the detection-versus-creation disparity: humans can spot high-quality video deepfakes only 24.5% of the time, the market for detection tools is growing at 28-42% annually, and deepfake content is growing at 900% per year.
Traditional fraud prevention advice is becoming obsolete in the face of AI that can convincingly replicate voices, faces, and writing styles.
For individuals and organisations alike, the data suggests a sobering reality: in the arms race between scammers and defenders, the scammers are currently winning.
The question is no longer whether scams are getting harder to detect (the statistics confirm they are) but how quickly defensive technologies and human awareness can catch up to this rapidly evolving threat.
Sources: FBI Internet Crime Complaint Center 2024 Annual Report, F-Secure Living Secure Survey 2024, Pew Research Center, Federal Trade Commission, McAfee, Signicat, Bolster, Capitol Technology University, DataVisor, and multiple academic studies on deepfake detection.

