Smarter tools, higher stakes
The stakes are high, and rising. In 2023, 51% of financial institutions surveyed by BioCatch reported losses of $5 million to $25 million due to AI-driven fraud. As these threats grow more advanced, compliance teams are under pressure to respond just as intelligently.
Machine learning (ML) models can help spot anomalies in real time, reduce false positives, and free staff to focus on real threats. Deep learning platforms can analyze unstructured data like voice calls, chat logs, and emails to detect fraud signals that static, rule-based systems can miss. Natural language processing tools scan communications for phishing attempts and inconsistencies, while graph analytics platforms enhanced with ML uncover hidden fraud networks by mapping relationships across accounts, devices, transactions, and geographies.
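To make the anomaly-detection idea concrete, here is a minimal sketch in which an unsupervised model scores transactions and flags outliers for analyst review. The column names, sample values, and contamination rate are illustrative assumptions, not a production design.

```python
# Minimal sketch: flagging unusual transactions with an unsupervised model.
# Feature names, values, and the contamination rate are illustrative only.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-transaction features: amount, hour of day, days since the
# last transaction, and count of distinct devices seen on the account.
transactions = pd.DataFrame({
    "amount":           [42.10, 18.75, 9500.00, 33.20, 12000.00, 27.60],
    "hour_of_day":      [14, 9, 3, 16, 2, 11],
    "days_since_last":  [1, 2, 0, 3, 0, 1],
    "distinct_devices": [1, 1, 4, 1, 5, 1],
})

# Fit an isolation forest; contamination is the assumed share of anomalies.
model = IsolationForest(contamination=0.2, random_state=42)
transactions["flag"] = model.fit_predict(transactions)  # -1 = anomalous

# Flagged rows go to an analyst queue rather than triggering automatic blocks.
print(transactions[transactions["flag"] == -1])
```

Routing flagged transactions to a human reviewer, rather than acting on them automatically, is what keeps this kind of model a decision aid instead of a new source of wrongful account freezes.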
But as the tools improve, so do the tactics used to exploit them. In March 2023, a flaw in an open-source component led to a data breach in ChatGPT. Some users were able to access others’ chat titles and partial payment information. OpenAI took ChatGPT offline and notified affected users, but the incident underscored a broader point: AI systems are only as secure as the code they rely on. And on public platforms, any data entered, however harmless, can wind up in the wrong hands.
That’s why many companies are shifting to private, enterprise-grade AI environments, where usage is governed by stricter security, access control, and data retention policies. As Gartner notes, this move marks a key shift in how institutions are approaching AI risk management.
Governance in the crosshairs
Financial institutions are increasingly relying on AI to detect and prevent fraud, but weak governance can turn these tools into new liabilities. In 2022, Bank of America was fined $225 million for improperly freezing unemployment benefit accounts using faulty automated fraud detection systems, leaving thousands without access to funds.
This stark example underscores the need for stringent governance frameworks. While AI can be pivotal in combating fraud, it also raises tough questions about fairness and transparency. Issues like bias, explainability, and data misuse remain top concerns. AI models trained on historical data can reflect and amplify existing disparities. In 2020, facial recognition falsely identified Robert Williams as a robbery suspect, leading to a wrongful arrest. A 2024 study by Lehigh and Babson found that mortgage underwriting models were more likely to deny loans to African American applicants, even when their profiles matched those of white peers, due to biased training data.
To address these risks, financial institutions are turning to explainability tools that reveal how AI models make decisions. By identifying which inputs influenced a particular outcome and why, these tools improve transparency, support compliance, and build trust. But tools alone aren’t enough. Effective governance also requires collaboration among legal, compliance, and technical teams, along with a clear understanding of current and emerging regulations.
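As a simplified illustration of what these tools do, the sketch below uses a linear credit model, where each feature’s contribution to a single decision is just its coefficient times its scaled value. The feature names and figures are hypothetical; dedicated explainability libraries such as SHAP and LIME extend the same per-decision attribution to more complex models.

```python
# Minimal sketch: explaining a single credit decision with a linear model.
# Feature names and data are hypothetical; tools like SHAP or LIME apply the
# same idea (per-feature attribution for one decision) to nonlinear models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["income", "debt_ratio", "credit_history_years", "recent_inquiries"]
X = np.array([
    [85_000, 0.20, 12, 1],
    [42_000, 0.55,  3, 6],
    [67_000, 0.35,  8, 2],
    [30_000, 0.60,  1, 7],
    [95_000, 0.15, 15, 0],
    [38_000, 0.50,  2, 5],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

scaler = StandardScaler()
model = LogisticRegression().fit(scaler.fit_transform(X), y)

# Explain one applicant: contribution = coefficient * scaled feature value.
# Positive values push toward approval, negative values toward denial.
applicant = scaler.transform([[45_000, 0.48, 4, 4]])
contributions = model.coef_[0] * applicant[0]
for name, value in sorted(zip(features, contributions), key=lambda p: p[1]):
    print(f"{name:>22}: {value:+.2f}")
print("decision:", "approve" if model.predict(applicant)[0] else "deny")
```

A printout like this, attached to each decision, is the kind of artifact that lets compliance teams answer why an application was denied rather than pointing to an opaque score.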
Privacy is another critical concern. A bank using AI to analyze customer transactions for personalized offers must ensure it isn’t violating consent agreements or repurposing data in ways that breach compliance. Even anonymized data can be risky if it’s re-identified or reused beyond its original intent. Institutions need to ask tough questions: Who can access this data, for how long, and under what conditions? Without answers, privacy and compliance risks remain high.
Moving from manual to managed
Many financial institutions still rely on siloed systems, spreadsheets, and manual processes to meet today’s compliance demands. These workflows are slow, inconsistent, and error-prone—especially as regulatory expectations rise and threats intensify.
To keep pace, leading firms are modernizing their infrastructure and governance, shifting from patchwork automation to managed, AI-enabled systems that scale. JPMorgan Chase’s COiN platform, for example, reviews commercial loan agreements using machine learning—cutting what was once 360,000 hours of manual legal work annually down to minutes with near-zero error rates.
Other institutions are automating repetitive checks in Know-Your-Customer (KYC) and transaction monitoring, using natural language models to summarize policies or regulatory filings, and applying generative AI to streamline audit prep and internal reporting. These use cases don’t just cut costs—they also reduce risk by standardizing workflows and surfacing anomalies in real time.
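To give a flavor of what automating a repetitive check can look like, the sketch below screens customer names against a watchlist with fuzzy matching and routes near-matches to an analyst queue. The names and the similarity threshold are illustrative assumptions; real KYC screening relies on vendor watchlists and far richer matching logic.

```python
# Minimal sketch: a repetitive KYC screening step, automated with fuzzy
# name matching. Watchlist entries and the 0.85 threshold are illustrative.
from difflib import SequenceMatcher

watchlist = ["Ivan Petrov", "Acme Shell Holdings Ltd", "Maria Gonzales"]

def screen_customer(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to `name` meets the threshold."""
    hits = []
    for entry in watchlist:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# Near-matches are queued for analyst review rather than auto-rejected.
for customer in ["Ivan Petrov", "Ivan Petrof", "Jane Smith"]:
    print(customer, "->", screen_customer(customer) or "clear")
```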
But even the best tools fall short without strong governance and cross-functional coordination. That’s the shift: from adopting individual solutions to managing AI within a framework that includes oversight, accountability, and clear alignment with business goals.
Executive leadership plays a critical role here—translating technical risks into business risks and ensuring AI investments support long-term strategy. When compliance, security, and leadership are aligned, institutions can respond faster and more confidently to emerging threats.
The future of AI-enabled compliance
These shifts mark the beginning of a broader transformation in how compliance is understood and executed. Generative AI is here to stay, and its impact on compliance is only beginning to emerge. To thrive in this new era, firms should treat AI not as a bolt-on addition, but as a core component of modern fraud, compliance, and risk management. The organizations that succeed will be those that govern AI with intention, ensuring systems are explainable, auditable, and aligned with enterprise risk appetite. The future of compliance won’t rest solely in the hands of regulators. It will be shaped by the firms that move early, lead responsibly, and build trust through action.
To stay ahead of the AI curve, subscribe to the Perspectives newsletter for strategic insights on compliance, risk, and the future of financial services.