In almost every conversation about AI in financial crime, and much of what is written on the subject, there is a point of view that has become so widely accepted it is rarely questioned.
Before you can do anything meaningful, you need strong foundations. Clean data. Unified architecture. Clear governance and well-defined controls. Only once those are in place, the argument goes, are you ready to experiment.
On the surface, it sounds sensible and aligns with simple ‘garbage in, garbage out’ reasoning. In practice, it is often the main reason AI initiatives fail to get off the ground.
In reality, AI can start delivering value without perfect foundations, and many of those foundations improve faster once AI is in use.
A common concern among MLROs and compliance leaders is the risk of uncontrolled experimentation. The worry is that once AI work gets going, it will race ahead, governance will struggle to keep up, and the organisation will find itself exposed to risk it cannot evidence or explain.
The concern is understandable. But in reality, governance tends to work better when it develops alongside each AI use case, in proportion to its risk and maturity.
Early AI uses should be tightly scoped, reviewed by existing risk owners, and run in parallel with current processes, with controls strengthened as confidence grows. This risk-based, staged approach is familiar to banks from existing model risk practices, such as running new fraud models in parallel with existing ones before promoting them into production.
AI can follow a similar pattern, with one important difference. Models and prompts do not stand still after go-live. They can be refined continuously, which means governance needs to support controlled iteration rather than long periods of stability.
The same thinking applies to data quality. Treating clean data as a pre-condition for AI delays progress and hides issues that only become visible once work begins. Early use cases can start with limited or anonymised data, with explainability and traceability designed in from the outset and more formal audit and change controls added as reliance grows.
Used carefully, AI exposes weaknesses in records and processes, creating a feedback loop that strengthens data and controls over time. Teams mature alongside the technology, with early use cases refined by subject matter experts and later scaled by engineering teams, building trust through use rather than one big launch.
Periodic KYC refresh is a good use case to explore, as it remains operationally difficult and expensive for many banks.
In a typical operating model, the process is largely manual. Cases are triggered on a schedule. Analysts gather information from internal systems, cross-check it against external registries, investigate discrepancies and document their work. The process is slow and outcomes are inconsistent.
Now consider the same process running in parallel with an AI model.
When a scheduled refresh is triggered for a corporate entity, the model retrieves the internal record from the core KYC system or CRM and compares it against trusted external sources, such as Companies House filings. Using techniques such as OCR and natural language processing, it can interpret unstructured documents that analysts currently review manually.
The model may identify a discrepancy, for example a change in the Ultimate Beneficial Owner structure or an updated Person with Significant Control status that is not reflected internally.
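As a rough illustration only, the sketch below shows what that comparison step could look like in code. The field names, record shapes and the fetch_companies_house_profile stub are hypothetical stand-ins, not a real registry integration or any particular vendor's product.

```python
# Illustrative sketch of the comparison step: field names and the registry
# stub are hypothetical, not a real Companies House client.
from dataclasses import dataclass

@dataclass
class Discrepancy:
    field: str
    internal_value: str
    external_value: str
    source: str  # where the external evidence came from

def fetch_companies_house_profile(company_number: str) -> dict:
    """Stand-in for retrieving and parsing an external registry filing."""
    return {
        "company_name": "Acme Trading Ltd",
        "persons_with_significant_control": ["J. Smith", "K. Patel"],  # updated PSC
    }

def compare_records(internal: dict, external: dict, source: str) -> list[Discrepancy]:
    """Compare field by field and return every mismatch with its evidence source."""
    findings = []
    for field_name, external_value in external.items():
        internal_value = internal.get(field_name)
        if internal_value != external_value:
            findings.append(
                Discrepancy(field_name, str(internal_value), str(external_value), source)
            )
    return findings

if __name__ == "__main__":
    internal_record = {
        "company_name": "Acme Trading Ltd",
        "persons_with_significant_control": ["J. Smith"],  # stale PSC entry
    }
    external_record = fetch_companies_house_profile("01234567")
    for d in compare_records(internal_record, external_record, "Companies House filing"):
        print(f"{d.field}: internal={d.internal_value!r} vs external={d.external_value!r} ({d.source})")
```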
Many tools would simply flag an exception at this point and pass the case to a human.
A well-designed AI model goes further by proposing a remediated data record. It reconstructs what the internal record should look like, based on the strongest available evidence, and links each proposed change back to its source so reviewers can see exactly how the conclusion was reached.
Nothing is written directly back into the core system. The proposed update is presented in a staging environment, where a relationship manager or KYC analyst can review the before-and-after view, assess the evidence and decide whether to apply or reject the change.
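A minimal sketch of that staging step, assuming hypothetical names such as ProposedChange and apply_decision, might look like the following. Nothing in it touches a core system; the only action is recording the reviewer's decision.

```python
# Illustrative sketch of a staged proposal: before/after values, evidence links
# and a human decision. Names are hypothetical; nothing writes to a core system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedChange:
    field_name: str
    before: str
    after: str
    evidence: list[str]             # references back to the source documents
    decision: Optional[str] = None  # "applied" or "rejected", set only by a human
    reviewer: Optional[str] = None

def apply_decision(change: ProposedChange, reviewer: str, approve: bool) -> None:
    """Record the human decision; promoting an approved change into the core
    record would be a separate, controlled step outside this sketch."""
    change.decision = "applied" if approve else "rejected"
    change.reviewer = reviewer

if __name__ == "__main__":
    change = ProposedChange(
        field_name="persons_with_significant_control",
        before="J. Smith",
        after="J. Smith; K. Patel",
        evidence=["Companies House PSC statement, filed 2024-03-01"],
    )
    print(f"BEFORE:   {change.before}\nAFTER:    {change.after}\nEVIDENCE: {change.evidence}")
    apply_decision(change, reviewer="kyc.analyst@example.bank", approve=True)
    print(f"Decision: {change.decision} by {change.reviewer}")
```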
Humans stay firmly in control, but they are no longer doing the manual work of searching, copying and reconciling data. Instead, their role is to make informed judgements.
This approach is explainable by design. Every suggestion, the evidence behind it, and the final human decision are recorded in a readable audit trail that can be reviewed internally or shared with auditors and regulators.
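To show what that audit trail might amount to in practice, here is one possible shape for an append-only record of each reviewed suggestion. The file name and field names are illustrative assumptions, not a prescribed format.

```python
# Illustrative sketch of an append-only audit record for each reviewed
# suggestion; the file name and field names are assumptions, not a standard.
import json
from datetime import datetime, timezone

def write_audit_entry(path: str, case_id: str, suggestion: dict,
                      evidence: list[str], decision: str, reviewer: str) -> None:
    """Append one self-describing audit record per reviewed suggestion."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "suggestion": suggestion,   # the proposed before/after change
        "evidence": evidence,       # sources the model relied on
        "decision": decision,       # applied / rejected
        "reviewer": reviewer,
    }
    with open(path, "a", encoding="utf-8") as audit_file:
        audit_file.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    write_audit_entry(
        "kyc_refresh_audit.jsonl",
        case_id="KYC-2024-001",
        suggestion={"field": "persons_with_significant_control",
                    "before": "J. Smith", "after": "J. Smith; K. Patel"},
        evidence=["Companies House PSC statement, filed 2024-03-01"],
        decision="applied",
        reviewer="kyc.analyst@example.bank",
    )
```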
The idea that organisations are ‘not ready’ for AI usually reflects concerns about scope, risk and control. These are unlikely to be resolved by a one-off initiative, and will never be resolved by waiting.
Unlike traditional software, AI evolves over time, so readiness and governance are best developed alongside each other. Safeguards can be introduced deliberately, and humans can retain oversight while automation does the heavy lifting.
There is an opportunity cost to the ‘foundations-first’ mindset. While banks spend months or years trying to perfect data estates and internal operating models, financial crime continues to evolve. Fraudsters and organised criminals are not working to three-year roadmaps. They are already using automation and AI, testing what works, discarding what does not, and iterating at speed.
The irony is that an approach intended to reduce risk can end up increasing it. Delaying practical AI adoption in the name of readiness leaves organisations exposed to threats that are moving faster than their change programmes.
Instead of worrying about readiness, a more useful question is whether the right controls are in place to start safely. Keep humans in the loop, measure performance properly, and scale in proportion to risk and confidence. This is how complex operational systems have always been built and improved. AI is no different, but it shortens the feedback loop and raises the cost of standing still.
