In almost every conversation about AI in financial crime, and much of what is written on the subject, there is a point of view that has become so widely accepted it is rarely questioned.
Before you can do anything meaningful, you need strong foundations. Clean data. Unified architecture. Clear governance and well-defined controls. Only once those are in place, the argument goes, are you ready to experiment.
On the surface, it sounds sensible and aligns with simple ‘garbage in, garbage out’ reasoning. In practice, it is often the main reason AI initiatives fail to get off the ground.
In reality, AI can start delivering value without perfect foundations, and many of those foundations improve faster once AI is in use.
A common concern among MLROs and compliance leaders is the risk of uncontrolled experimentation. The worry is that once AI work gets going, it will race ahead, governance will struggle to keep up, and the organisation will find itself exposed to risk it cannot evidence or explain.
The concern is understandable. But in reality, governance tends to work better when it develops alongside each AI use case, in proportion to its risk and maturity.
Early AI uses should be tightly scoped, reviewed by existing risk owners, and run in parallel with current processes, with controls strengthened as confidence grows. This risk-based, staged approach is familiar to banks from existing model risk practices, such as running new fraud models in parallel with existing ones before promoting them into production.
AI can follow a similar pattern, with one important difference. Models and prompts do not stand still after go-live. They can be refined continuously, which means governance needs to support controlled iteration rather than long periods of stability.
The same thinking applies to data quality. Treating clean data as a pre-condition for AI delays progress and hides issues that only become visible once work begins. Early use cases can start with limited or anonymised data, with explainability and traceability designed in from the outset and more formal audit and change controls added as reliance grows.
Used carefully, AI exposes weaknesses in records and processes, creating a feedback loop that strengthens data and controls over time. Teams mature alongside the technology, with early use cases refined by subject matter experts and later scaled by engineering teams, building trust through use rather than through one big launch.
Periodic KYC refresh is a good use case to explore, as it remains operationally difficult and expensive for many banks.
In a typical model, the process is largely manual. Cases are triggered on a schedule. Analysts gather information from internal systems, cross-check it against external registries, investigate discrepancies and document their work. The process is slow and outcomes are inconsistent.
Now consider the same process running in parallel with an AI model.
When a scheduled refresh is triggered for a corporate entity, the model retrieves the internal record from the core KYC system or CRM and compares it against trusted external sources, such as Companies House filings. Using techniques such as OCR and natural language processing, it can interpret unstructured documents that analysts currently review manually.
The model may identify a discrepancy, for example a change in the Ultimate Beneficial Owner structure or an updated Person with Significant Control status that is not reflected internally.
Many tools would simply flag an exception at this point and pass the case to a human.
A well-designed AI model goes further by proposing a remediated data record. It reconstructs what the internal record should look like, based on the strongest available evidence, and links each proposed change back to its source so reviewers can see exactly how the conclusion was reached.
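To make the compare-and-propose step concrete, here is a minimal sketch. The record shape, field names and the registry extract are all hypothetical placeholders, and in practice the external data would come from registry APIs and parsed filings rather than in-memory dictionaries; the point is that every proposed change carries a link back to its evidence.

```python
# Sketch: compare an internal KYC record against external registry data and
# propose a remediated record, linking each change to its source.
# All field names and values are hypothetical placeholders.

def propose_remediation(internal: dict, external_sources: list[dict]) -> dict:
    """Return a proposed record plus per-field evidence for each change."""
    proposed = dict(internal)
    evidence = []
    for source in external_sources:
        for field, value in source["fields"].items():
            if internal.get(field) != value:
                proposed[field] = value
                evidence.append({
                    "field": field,
                    "current": internal.get(field),
                    "proposed": value,
                    "source": source["name"],          # e.g. the registry name
                    "reference": source["reference"],  # document or filing id
                })
    return {"proposed_record": proposed, "evidence": evidence}

# Hypothetical example: the PSC recorded internally no longer matches the filing.
internal_record = {"psc": "J. Smith", "registered_address": "1 Old St"}
registry_extract = [{
    "name": "Companies House",
    "reference": "filing-2024-017",
    "fields": {"psc": "A. Jones", "registered_address": "1 Old St"},
}]

result = propose_remediation(internal_record, registry_extract)
# The evidence list contains one entry: the PSC change, traceable to its filing.
```

Because unchanged fields generate no evidence entries, a reviewer sees only the deltas and the documents behind them, not the whole record.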
Nothing is written directly back into the core system. The proposed update is presented in a staging environment, where a relationship manager or KYC analyst can review the before-and-after view, assess the evidence and decide whether to apply or reject the change.
Humans stay firmly in control, but they are no longer doing the manual work of searching, copying and reconciling data. Instead, their role is to make informed judgements.
This approach is explainable by design. Every suggestion, the evidence behind it, and the final human decision are recorded in a readable audit trail that can be reviewed internally or shared with auditors and regulators.
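A minimal shape for such an audit trail might record each suggestion, its evidence and the human decision as append-only entries. The structure below is an assumption for illustration, not a prescribed schema:

```python
# Sketch of an append-only audit trail for AI-proposed KYC changes.
# The entry structure is illustrative, not a prescribed schema.
import json
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_decision(case_id, field, proposed, evidence_ref, reviewer, decision):
    """Append one reviewable entry covering a suggestion and its outcome."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "field": field,
        "proposed_value": proposed,
        "evidence": evidence_ref,   # link back to the source document
        "reviewer": reviewer,
        "decision": decision,       # "applied" or "rejected"
    }
    audit_log.append(entry)
    return entry

# Hypothetical example: the analyst accepts the proposed PSC update.
record_decision("KYC-001", "psc", "A. Jones", "filing-2024-017",
                reviewer="analyst-42", decision="applied")

# The trail can be exported in a readable form for auditors or regulators.
print(json.dumps(audit_log, indent=2))
```

Keeping the log append-only means rejected suggestions are preserved alongside applied ones, so the organisation can evidence not just what changed but what it chose not to change.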
The idea that organisations are ‘not ready’ for AI usually reflects concerns about scope, risk and control. These are unlikely to be resolved by a one-off initiative, and will never be resolved by waiting.
Unlike traditional software, AI evolves over time, so readiness and governance are best developed alongside each other. Safeguards can be introduced deliberately, and humans can retain oversight while automation does the heavy lifting.
There is an opportunity cost to the ‘foundations-first’ mindset. While banks spend months or years trying to perfect data estates and internal operating models, financial crime continues to evolve. Fraudsters and organised criminals are not working to three-year roadmaps. They are already using automation and AI, testing what works, discarding what does not, and iterating at speed.
The irony is that an approach intended to reduce risk can end up increasing it. Delaying practical AI adoption in the name of readiness leaves organisations exposed to threats that are moving faster than their change programmes.
Instead of worrying about readiness, a more useful question is whether the right controls are in place to start safely. Keep humans in the loop, measure performance properly, and scale in proportion to risk and confidence. This is how complex operational systems have always been built and improved. AI is no different, but it shortens the feedback loop and raises the cost of standing still.
