Fraud is getting more sophisticated, thanks to artificial intelligence (AI).
Fraud can be perpetrated through deepfake video or voice, with AI producing a clone of a family member who is supposedly in an emergency and needs a cash transfer immediately. AI can write more convincing phishing emails, removing telltale signs such as broken English. AI can also fake images such as a driver’s license to fool and scam people, according to an FBI report.
“Fraud is only going to get worse with the creation of generative AI,” said Mike de Vere, CEO of Zest AI, which leverages AI to help financial services firms make more informed lending decisions and mitigate fraud incidents.
According to a March 2025 report from the U.S. Federal Trade Commission (FTC), fraud losses hit $12.5 billion in 2024, up 25% from the prior year. A larger share of people who reported fraud also said they lost money: 38% last year, compared with 27% in 2023.
Investment scams cost consumers the most, totaling $5.7 billion, up 24% from the year before. The second-highest category was imposter scams, at $2.95 billion. Imposter scams were, however, the most commonly reported type of fraud, followed by online shopping fraud.
Notably, consumers lost more money to scams through bank transfers or cryptocurrency than all other payment methods combined, the FTC said.
According to a PYMNTS Intelligence study in partnership with i2c, 28% of consumers fell victim to credit card fraud last year. Moreover, 37% said they were “very” or “extremely” worried about falling victim to such fraud, according to “Consumer Credit Economy: Credit Card Fraud.”
In an interview with PYMNTS, de Vere said fraud losses are projected to reach $40 billion by 2027. Fraud tools are becoming more accessible, he added, noting that for as little as $20, criminals can do things like create fake IDs and pay stubs.
Read more: 37% of Consumers Highly Concerned About Credit Card Fraud
What Financial Institutions Wrongly Believe
Based on his experience working with banks and credit unions, de Vere shared his insights on five myths about fraud prevention that could leave organizations vulnerable.
Myth 1: Small Banks Are Safe Against Fraud
The first misconception is that fraudsters only target major financial institutions. In reality, 8 out of 10 banks and credit unions, including smaller ones, reported fraud losses exceeding $500,000 last year.
“It disproportionately impacts smaller financial institutions,” de Vere said. “A fraudster going up against Citi’s IT team is probably going to be less successful than [targeting] a tiny credit union that outsources their IT.”
Myth 2: Transaction Monitoring Is Enough
Many institutions believe that monitoring individual transactions, such as reviewing a customer’s credit card patterns to spot a fraudulent purchase, provides adequate fraud protection.
However, de Vere said this narrow approach misses the broader behavioral patterns that AI can detect. He shared this real-world example: A fraudster opened a credit card at a credit union, charging about $100 a month and paying it off regularly. By itself, this behavior doesn’t raise red flags. However, this criminal was doing the same thing at several credit unions, de Vere said. The individual eventually applied for and received personal loans, maxed out the credit cards and disappeared with the money.
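To illustrate the idea, here is a minimal sketch, not Zest AI’s actual system, of how the same lightly used, recently opened accounts could be spotted once behavior is pooled across institutions; the field names, thresholds and sample data are hypothetical.

```python
from collections import defaultdict

# Hypothetical records: (institution, identity_hash, card_utilization, months_open).
# In practice, seeing across institutions would require a shared or consortium dataset.
def flag_cross_institution_pattern(records, min_institutions=3):
    """Flag identities holding recently opened, lightly used cards at several institutions."""
    by_identity = defaultdict(set)
    for institution, identity, utilization, months_open in records:
        # Low utilization on a young account looks harmless in isolation
        if utilization < 0.10 and months_open < 12:
            by_identity[identity].add(institution)
    # The same quiet behavior repeated across many institutions is the red flag
    return {identity for identity, insts in by_identity.items() if len(insts) >= min_institutions}

sample = [
    ("CU-A", "id123", 0.05, 6),
    ("CU-B", "id123", 0.04, 5),
    ("CU-C", "id123", 0.06, 7),
    ("CU-A", "id456", 0.60, 24),
]
print(flag_cross_institution_pattern(sample))  # {'id123'}
```

Each institution on its own sees only one unremarkable account; only the pooled view reveals the pattern de Vere described.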
Myth 3: Security Requires Friction
The third myth revolves around the idea that to be secure, a financial institution has to put the customer through hoops, such as answering security questions, which creates friction in the customer experience. These binary fraud systems, which ask only whether something is fraud or not, can create problems unnecessarily, de Vere said.
He shared his personal experience of being flagged for ID fraud during an auto loan application simply because his last name was squished together. “An AI solution could have looked at my credit report and seen that … two of my credit cards actually have my last name smashed together, so it’s probably not likely that I’m a fraudster.”
Myth 4: Manual Reviews Catch Fraud
Humans are supposed to be the gold standard when it comes to catching fraud, but de Vere argued that manual reviewers are only as good as their experience, and that experience is limited to what they have seen within their own institution.
In contrast, an AI model can consume trillions of data points to identify patterns of fraud. “It’s so far beyond where a human can be,” de Vere said.
Myth 5: All Fraud Solutions Are Equal
The final myth is that fraud prevention solutions are interchangeable. De Vere said that many available solutions are incomplete, creating blind spots in security coverage.
He said a robust fraud prevention solution should offer probability scores rather than binary “fraud/no-fraud” decisions, be trained on comprehensive datasets and tailored to an organization’s needs and geographic location. This approach lets organizations identify local fraud rings and deploy appropriate security measures.
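As a rough illustration of the difference, a score-based system can route applicants to different levels of verification instead of forcing a yes/no call; the thresholds and routing below are hypothetical, not Zest AI’s.

```python
# Hypothetical illustration: a probability score lets an institution tune friction
# to risk rather than making a binary fraud/no-fraud decision.
def route_application(fraud_probability: float) -> str:
    if fraud_probability < 0.05:
        return "approve automatically"        # low risk: no added friction
    if fraud_probability < 0.40:
        return "request one extra document"   # medium risk: light-touch verification
    return "send to manual review"            # high risk: full review

for p in (0.01, 0.20, 0.75):
    print(p, "->", route_application(p))
```

Most customers would pass through with no added friction, while only the riskiest applications would trigger the kind of hoops described in Myth 3.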
Advocating for a collaborative approach to fighting fraud, de Vere said, “We need to be thinking less about it being a competitive issue and more about it being a collaborative issue.”
To that end, Zest AI has created a consortium to share fraud experiences, enabling AI models to learn from attacks on one institution to protect others in the same ecosystem.