Since joining BeyondFS last year from Deloitte, I’ve spoken almost daily to senior leaders across Financial Crime – often about AI. There is broad recognition of AI’s potential, but many are frustrated by how hard it is to move forward, to turn experiments into scalable results, and to do so without losing control.
Our industry is still at an early stage in its use of AI, but we are already seeing clear patterns we can learn from. I outline these below, focusing on how AI can be applied in ways that improve productivity, risk effectiveness, and confidence.
Financial Crime leaders are expected to reduce cost while maintaining control, meet governance expectations, and keep pace with a threat environment that is evolving faster than most operating models were designed to handle. Teams are already stretched, so there is little patience for programmes that absorb time and money without changing outcomes.
The real question is whether AI can be used to take workload out of the system with similar or better risk outcomes. A practical test is whether organisations can move beyond experimentation and put a small number of AI-enabled capabilities into production, removing work in first line operations and reducing friction and rework in second line oversight.
Financial Crime teams have seen technology waves come and go. Some delivered efficiency gains, but few changed the underlying shape of the function. The same work still happens in broadly the same way, with the same dependencies, controls and hand-offs, just slightly faster.
What distinguishes this wave of AI is its ability to work with unstructured inputs, and to apply judgement more consistently at scale. It changes how information is assembled and how decisions are made.
Another difference is accessibility. Practitioners can now build their own prototypes and micro-tools in safe, non-production environments. This can be positive, accelerating learning and bringing Compliance teams closer to Tech and Data teams through work on real problems.
But it also raises practical questions. How far should experimentation go before it becomes technology delivery? When should it move under formal governance, including secure build standards and change control?
These questions become even more important as organisations begin to explore agentic AI. Rather than automating individual steps inside established processes, agentic AI can change how work enters the function and how responsibility moves between human and machine. Workflows themselves change, rather than isolated activities simply being optimised.
This is a structural shift in how Financial Crime work gets done.
Scepticism about AI is understandable. The technology is rarely landing in clean, well-engineered environments. It is arriving in functions that are expensive, complex and hard to scale without adding headcount.
That is why AI is often bolted onto existing workflows to show progress quickly. Demos are impressive and early results look encouraging, but the underlying way of working does not change. As a result, benefits that can be rolled out at scale are hard to evidence.
Organisations will not sustain investment in AI pilots unless those pilots remove workload, alter the resourcing footprint and change the economics of the function. What we are hearing in the market reflects this, and it is increasingly supported by external evidence, including:
- Scale of failure: Research from MIT in 2025 suggested that as many as 95% of generative and agentic AI pilot projects did not deliver a clear return on investment.
- Data quality constraints: The FinCrime Frontier Report (2025) found that while around 80% of financial institutions plan to innovate with AI, only 11% are very confident in the quality of the data underpinning their systems.
- The black box problem: Regulatory expectations from bodies such as the FCA and EBA around explainable AI continue to limit scale. Many pilots struggle to progress because models cannot provide a clear audit trail explaining why a customer was flagged. Consilient (2025) reported that 47% of compliance officers are unable to translate technical AI output into a defensible narrative for a regulator.
- Operational friction versus AI hype: Fenergo (2025) reported that 70% of clients were lost due to slow onboarding, despite AI adoption doubling year on year. In commercial banking, onboarding timelines remain around six weeks, highlighting that AI has yet to resolve end-to-end operational bottlenecks.
Once introduced, AI begins to influence judgement and priorities. It raises questions about who owns decisions, how they are governed, and how work moves through the function. These questions are not about technology, but about the operating model.
AI changes both the operating model and the economics of Financial Crime. Over time, fewer people focus on pure production work, while more effort shifts into oversight, quality assurance, exception handling, data stewardship and model performance management. Senior management will expect clarity on how AI is being used safely and effectively, and regulators will expect clarity on risk decisions and their explainability.
This is also where the join between lines of defence becomes critical. If AI improves first line throughput but increases second line review effort, cost is shifted rather than removed.
Organisations are at different points in their use of AI. They will move at different speeds, take different routes, and pass through different phases depending on their starting point, risk appetite, data maturity and operating model constraints.
Progress depends on putting the right foundations in place for each step change. That includes clarity on data ownership and quality, governance that supports safe use, clear accountability across the lines of defence, and practical routes from experimentation into production.
For many, a fully AI-enabled operating model is still some way off. Early deployments therefore need to be guided by a clear sense of direction, or they risk becoming isolated experiments rather than building blocks for broader change.
Equally important, organisations need to prove tangible gains as they move forward. Progress is easier to sustain when each step delivers visible benefit and when the requirements to move further are understood.
An AI readiness assessment can help show where your organisation currently sits, and that assessment is often relatively straightforward. The greater challenge lies in navigating the practical obstacles to getting AI into production and delivering value with a clear return on investment.
A perfect end-state design is not needed before acting. What matters is clarity on where you are today, where you realistically want to be, and over what timeframe, so early deployments move in the right direction and can be absorbed into business as usual.
Progress should be judged by outcomes. One deployment may be small, another transformational, and the right pace of change will vary by firm. The priority is to get something into production that delivers a measurable outcome, whether through reduced manual effort, improved productivity per colleague, reduced losses (e.g., fraud), or increased confidence with executives and regulators.
Experimentation has immense value and helps build much-needed AI fluency across Financial Crime teams. But where I expect AI to make a meaningful difference is in firms that bring discipline to how it moves into production.
An AI-native delivery approach that applies proper engineering, governance and control will ensure that promising design work translates into solutions that are safe, secure, and capable of delivering the outcomes they were built for.
