It seems that AI has become the default answer to almost every operational challenge in Financial Crime.
At a recent roundtable we hosted, the overwhelming message was that AI solutions were being pushed in response to every problem statement across attendees' organisations. One compliance leader even said that their end-of-year performance objectives were tied directly to AI usage.
In my view, when the incentive structure rewards AI adoption as an end in itself, you have a problem.
The risk is that AI is treated as the solution before the problem has been properly understood. When AI is chosen over a simpler option that would do the job, the result can be unnecessarily unwieldy and poorly implemented.
This article is about helping compliance teams slow down long enough to ask the right question before choosing a solution – and to push back, where needed, on the pressure to reach for AI by default.
AI has been a buzzword in Financial Crime for years, but what has changed is its accessibility. Tools like ChatGPT and Copilot have put powerful capabilities directly in the hands of non-technical teams. Compliance Analysts and Operations Leads can now prototype solutions through 'vibe coding' without involving engineering. That is genuinely exciting, but it also means people who have never been through a disciplined software development lifecycle (SDLC) are now making technology decisions.
In my experience, this is where things go wrong most often. The problem is not that non-technical people are experimenting – it is that they are jumping straight to building solutions without properly defining the problem they want to solve. Powerful tools in the hands of non-experts, without the right process to support them, can lead to expensive mistakes.
A useful concept to keep in mind is what we call ‘solution elegance’ – or the ‘KISS’ principle: Keep It Stupidly Simple. In practice, this means selecting the simplest solution that adequately solves the problem, rather than defaulting to the most sophisticated option available. AI-first thinking can lead to over-engineered solutions that are expensive to build, costly to maintain and difficult to explain to regulators. When assessing technology options with clients, we evaluate them across multiple factors: total cost of ownership, complexity to maintain and enhance, speed to deploy and explainability to oversight functions.
The table below illustrates the two ends of that spectrum, using fraud detection as an example:
An over-engineered solution slows your response to financial crime challenges, increases cost and makes decisions harder to explain to regulators.
AI offers powerful tools for tackling genuinely complex problems – reducing screening alert volumes, identifying sophisticated network patterns, improving data quality at scale. But scripting, automation, rule-tuning and other established techniques should not be abandoned just because AI has arrived. The most effective compliance teams use the full toolkit, matching each problem to the solution that best fits it.
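To make the "simplest adequate solution" point concrete, here is a hedged sketch of what an established, non-AI technique can look like in practice. This is an illustration only: the field names, thresholds and country codes are hypothetical, not a real vendor schema or regulatory threshold.

```python
# Illustrative only: a minimal rules-based transaction screen.
# Field names, thresholds and country codes below are hypothetical.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder country codes
AMOUNT_THRESHOLD = 10_000           # hypothetical review threshold

def flag_transaction(txn: dict) -> list:
    """Return the names of any rules this transaction trips."""
    reasons = []
    if txn["amount"] >= AMOUNT_THRESHOLD:
        reasons.append("large_amount")
    if txn["country"] in HIGH_RISK_COUNTRIES:
        reasons.append("high_risk_country")
    if txn["amount"] > 0 and txn["amount"] % 1_000 == 0:
        # Crude indicator of round-figure structuring
        reasons.append("round_amount")
    return reasons

alerts = flag_transaction({"amount": 12_000, "country": "XX"})
```

A screen like this is cheap to build, fast to deploy, and trivially explainable to an oversight function: every alert maps to a named rule. That explainability is precisely what the more sophisticated end of the spectrum tends to sacrifice.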
If you take one thing from this, it should be that you have every right to push back when someone tells you the answer is AI before anyone has properly defined the question.
But applying that thinking in practice is not always straightforward. Complex data environments, regulatory expectations and internal pressures can make it difficult to stay focused on the simplest effective solution.
BeyondFS can assist with this. We help Financial Crime leaders clarify the problem, avoid over-engineering, and apply the right solution with pace and discipline – reducing risk, improving performance, and delivering outcomes that stand up to scrutiny.
The best solution depends on what matters most in your context, whether that is speed to deploy, cost, explainability, or ease of operation.
We work with clients to define that clearly upfront through structured requirements, target state design and disciplined vendor selection, before any commitment is made. If that is where you need support, get in touch.
