
The New Face of Fraud in Finance

Resilience starts with knowing who’s in charge of the algorithm.

For financial institutions, generative AI is both an opportunity and a challenge, reshaping compliance and the threat landscape in real time. Until recently considered experimental, it is now being embedded across financial services, from onboarding and transaction monitoring to anti-money laundering and reporting. But as adoption grows, so do the risks. Deepfakes, phishing, and convincing AI-generated messages that mimic executives are making fraud more scalable and harder to detect, raising the stakes across the industry.

According to Nasdaq Verafin, more than $3.1 trillion in illicit funds flowed through the global financial system in 2023, with fraud scams estimated to cost $485.6 billion globally. And the losses are accelerating: Deloitte predicts that AI-enabled fraud losses in the U.S. could more than triple, from $12.3 billion in 2023 to $40 billion by 2027, as threat actors leverage GenAI to automate and personalize attacks at scale.

Meanwhile, compliance demands are intensifying. Frameworks like GDPR, DORA, and the SEC’s T+1 settlement rule set ever-stricter standards for data protection, real-time reporting, and resilience, requiring firms to prove they can maintain operations during disruptions. DORA, for example, forces financial institutions to prove they can recover from ICT failures such as cyberattacks and vendor outages. In the U.S., SEC rules now require public companies to report material cybersecurity incidents within four business days and to document their risk management practices, making cyber resilience a board-level responsibility.

Further, regulators now expect firms to explain how AI decisions are made—and to hold leadership accountable when they can’t. Under the EU AI Act, institutions must demonstrate transparency, risk mitigation, and human oversight. These rules apply not just to EU-based firms, but to any business that markets or uses AI in the EU, regardless of where it’s based.


Smarter tools, higher stakes

The stakes are high, and rising. In 2023, 51% of financial institutions surveyed by BioCatch reported losses of $5 million to $25 million due to AI-driven fraud. As these threats grow more advanced, compliance teams are under pressure to respond just as intelligently.


Machine learning (ML) models can help spot anomalies in real time, reduce false positives, and free staff to focus on genuine threats. Deep learning platforms can analyze unstructured data such as voice calls, chat logs, and emails to detect fraud signals that static, rule-based systems miss. Natural language processing tools scan communications for phishing attempts and inconsistencies, while graph analytics platforms enhanced with ML uncover hidden fraud networks by mapping relationships across accounts, devices, transactions, and geographies.
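To make the anomaly-detection piece concrete, here is a minimal sketch using scikit-learn’s IsolationForest on simulated transactions. The features, thresholds, and data are hypothetical; a production system would score streaming data with far richer inputs.

```python
# A minimal sketch of transaction anomaly scoring, assuming invented
# features: [amount, hour of day, transactions in the last 24 hours].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "known good" transaction history.
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),   # typical purchase amounts
    rng.integers(8, 22, 5000),       # daytime activity
    rng.poisson(3, 5000),            # modest daily volume
])

# Fit an unsupervised anomaly detector on historical behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score incoming transactions; -1 flags an anomaly for analyst review.
incoming = np.array([
    [45.0, 14, 2],     # ordinary daytime purchase
    [9800.0, 3, 40],   # large amount, 3 a.m., burst of activity
])
print(model.predict(incoming))   # e.g. [ 1 -1 ]
```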


But as the tools improve, so do the tactics used to exploit them. In March 2023, a flaw in an open-source component led to a data breach in ChatGPT. Some users were able to access others’ chat titles and partial payment information. OpenAI took ChatGPT offline and notified affected users, but the incident underscored a broader point: AI systems are only as secure as the code they rely on. And on public platforms, any data entered, however harmless it seems, can wind up in the wrong hands.

That’s why many companies are shifting to private, enterprise-grade AI environments, where usage is governed by stricter security, access control, and data retention policies. As Gartner notes, this move marks a key shift in how institutions are approaching AI risk management.


Governance in the crosshairs

Financial institutions are increasingly relying on AI to detect and prevent fraud, but weak governance can turn these tools into new liabilities. In 2022, Bank of America was fined $225 million for improperly freezing unemployment benefit accounts using faulty automated fraud detection systems, leaving thousands without access to funds.


This stark example underscores the need for stringent governance frameworks. AI can be pivotal in combating fraud, it also raises tough questions about fairness and transparency. Issues like bias, explainability, and data misuse remain top concerns. AI models trained on historical data can reflect and amplify existing disparities. In 2020, facial recognition falsely identified Robert Williams as a robbery suspect, leading to a wrongful arrest. A 2024 study by Lehigh and Babson found that mortgage underwriting models were more likely to deny loans to African American applicants, even when their profiles matched those of white peers, due to biased training data.
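One concrete way teams probe for this kind of disparity is to compare outcomes across groups. The sketch below, using invented lending decisions, computes a simple demographic-parity gap with pandas; real fairness reviews also examine error rates, calibration, and matched-pair comparisons.

```python
# A minimal sketch of a demographic-parity check on hypothetical decisions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Approval rate per group; a large gap is a signal to investigate,
# not proof of bias on its own.
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("demographic parity gap:", rates.max() - rates.min())  # 0.75 - 0.25 = 0.5
```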


To address these risks, financial institutions are turning to explainability tools that reveal how AI models make decisions. By identifying which inputs influenced a particular outcome and why, these tools improve transparency, support compliance, and build trust. But tools alone aren’t enough. Effective governance also requires collaboration among legal, compliance, and technical teams, along with a clear understanding of current and emerging regulations.
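As a rough illustration of what such tooling does, the sketch below uses the open-source shap library to attribute a toy fraud model’s score to its inputs. The model and feature names are invented, not any particular vendor’s product.

```python
# A minimal sketch of per-decision explainability with shap; the
# hypothetical features are [amount_zscore, geo_risk, velocity].
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Attribute a single decision to its inputs: which features pushed the
# score toward "fraud," and by how much.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])
print(contributions)
```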


Privacy is another critical concern. A bank using AI to analyze customer transactions for personalized offers must ensure it isn’t violating consent agreements or repurposing data in ways that breach compliance. Even anonymized data can be risky if it’s re-identified or reused beyond its original intent. Institutions need to ask tough questions: Who can access this data, for how long, and under what conditions? Without answers, privacy and compliance risks remain high.
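Those questions can be encoded directly into systems rather than left to policy documents. Below is a minimal, hypothetical sketch of purpose-based access control with retention limits; the roles, purposes, and windows are invented, and a real program would tie these checks to actual consent records.

```python
# A minimal policy-as-code sketch: access is granted only for an
# approved role, an approved purpose, and within the retention window.
from datetime import date, timedelta

# Hypothetical purpose-based policy: (allowed roles, maximum retention).
POLICY = {
    "fraud_monitoring":    ({"fraud_analyst"}, timedelta(days=365)),
    "personalized_offers": ({"marketing"},     timedelta(days=90)),
}

def access_allowed(role: str, purpose: str, collected: date) -> bool:
    roles, retention = POLICY.get(purpose, (set(), timedelta(0)))
    return role in roles and date.today() - collected <= retention

# Reusing fraud-monitoring data for marketing fails the role check.
print(access_allowed("marketing", "fraud_monitoring", date.today()))  # False
# Stale marketing data fails the retention check.
print(access_allowed("marketing", "personalized_offers",
                     date.today() - timedelta(days=120)))             # False
```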


Moving from manual to managed

Many financial institutions still rely on siloed systems, spreadsheets, and manual processes to meet today’s compliance demands. These workflows are slow, inconsistent, and error-prone—especially as regulatory expectations rise and threats intensify.


To keep pace, leading firms are modernizing their infrastructure and governance, shifting from patchwork automation to managed, AI-enabled systems that scale. JPMorgan Chase’s COiN platform, for example, reviews commercial loan agreements using machine learning—cutting what was once 360,000 hours of manual legal work annually down to minutes with near-zero error rates.
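COiN’s internals are not public, but the general shape of automated document review is easy to illustrate. In the sketch below, simple regular-expression patterns stand in for the trained models such platforms actually use; the agreement text and patterns are invented.

```python
# A minimal sketch of clause extraction from a loan agreement; regex
# patterns here are a stand-in for learned clause classifiers.
import re

AGREEMENT = """\
The Borrower shall maintain a minimum interest coverage ratio of 3.0x.
Governing Law: This Agreement is governed by the laws of New York.
The Lender may terminate this facility upon 30 days' written notice.
"""

PATTERNS = {
    "financial_covenant": r"coverage ratio of [\d.]+x",
    "governing_law":      r"Governing Law:.*",
    "termination":        r"terminate .* upon .* notice",
}

for clause, pattern in PATTERNS.items():
    match = re.search(pattern, AGREEMENT)
    if match:
        print(f"{clause}: {match.group(0).strip()}")
```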


Other institutions are automating repetitive checks in Know-Your-Customer (KYC) and transaction monitoring, using natural language models to summarize policies or regulatory filings, and applying generative AI to streamline audit prep and internal reporting. These use cases don’t just cut costs—they also reduce risk by standardizing workflows and surfacing anomalies in real time.
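To make one of these use cases concrete, here is a minimal sketch of watchlist screening, a common automated KYC check, using only Python’s standard library. The names and threshold are hypothetical, and production screening relies on licensed lists and far more robust entity matching.

```python
# A minimal fuzzy name-screening sketch against an invented watchlist.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings Ltd"]

def screen(name: str, threshold: float = 0.85) -> list[str]:
    """Return watchlist entries whose similarity to `name` meets the threshold."""
    return [
        entry for entry in WATCHLIST
        if SequenceMatcher(None, name.lower(), entry.lower()).ratio() >= threshold
    ]

print(screen("Ivan Petrov"))   # exact hit
print(screen("Ivan Petrow"))   # close variant still caught
print(screen("Jane Smith"))    # clean
```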


But even the best tools fall short without strong governance and cross-functional coordination. That’s the shift: from adopting individual solutions to managing AI within a framework that includes oversight, accountability, and clear alignment with business goals.


Executive leadership plays a critical role here—translating technical risks into business risks and ensuring AI investments support long-term strategy. When compliance, security, and leadership are aligned, institutions can respond faster and more confidently to emerging threats.


The future of AI-enabled compliance

These shifts mark the beginning of a broader transformation in how compliance is understood and executed. Generative AI is here to stay, and its impact on compliance is only beginning to emerge. To thrive in this new era, firms should treat AI not as a bolt-on addition, but as a core component of modern fraud, compliance, and risk management. The organizations that succeed will be those that govern AI with intention, ensuring systems are explainable, auditable, and aligned with enterprise risk appetite. The future of compliance won’t rest solely in the hands of regulators. It will be shaped by the firms that move early, lead responsibly, and build trust through action.


To stay ahead of the AI curve, subscribe to the Perspectives newsletter for strategic insights on compliance, risk, and the future of financial services.
