Prevention and detection

The fraud operational cost crisis: why the current model can’t scale

Published: 12/1/2026

“How many analysts would it take to clear the queue by Friday?” It’s a question that echoes through every fraud operations meeting. The answer is rarely good news.

Case volumes keep growing. Hiring can’t keep pace. Investigation times stretch from days to weeks. Banks spend more, yet results don’t improve. Fraud operations have quietly entered a cost crisis, driven by an operating model that leans too heavily on manual intervention to connect signals that systems should already understand.

In this article, we explain why the next generation of defence will depend on automation that’s explainable, traceable, and built on verified truth.

Where the current model breaks down

Fraud detection has become too fragmented to manage manually. Every new channel, product, and vendor adds more alerts, dashboards, and handoffs. Analysts spend most of their time reconciling evidence rather than making decisions.

The cracks show up in three places:

  • Volume: alerts are growing faster than analyst capacity. Even the best teams can’t keep up when every new payment type or campaign adds more signals to review.
  • Variability: attack patterns evolve constantly, but institutional knowledge sits in the heads of experienced analysts. When they move on, so does that knowledge.
  • Visibility: systems still assess one event at a time: one user, one device, one channel. Risk scores pass between systems like shorthand, detached from the evidence that created them.

The result is higher cost, slower response, and rising fatigue. Teams are working harder but seeing less.

Why automation has to come next

Automation has always sounded appealing in theory, but difficult in practice. Many teams have tried it and pulled back. Black-box models make decisions no one can justify, and overconfident rules engines can block genuine customers. So “automation” became a synonym for “workflow shortcuts,” not a real change in capability.

That’s shifting. When decisions are built on deterministic, explainable data, such as the signals produced through Cleafy’s Attack Pattern Recognition (APR), automation becomes safe. Every action and decision carries the evidence behind it.

Automation doesn’t take people out of the process; it puts them in charge of it.

From human-in-the-loop to human-on-the-loop

Fraud analysts shouldn’t be clearing queues; they should be supervising systems that can act on clear, trusted signals.

The new operating model keeps humans in control while freeing them from repetitive tasks. Systems handle what’s predictable:

  • Collecting device, network, behavioural, and transaction evidence into a single case view.
  • Executing pre-approved actions when high-confidence conditions are met, such as holding a payment or forcing step-up authentication.
  • Feeding analyst feedback straight back into baselines to sharpen detection accuracy.

Humans stay on the loop: reviewing exceptions, validating edge cases, and maintaining governance.
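As an illustrative sketch only (the names, threshold, and action list below are hypothetical, not Cleafy's implementation), a human-on-the-loop decision step might gather evidence into one case view and execute a pre-approved action only when a confidence bound is met, routing everything else to an analyst:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A single case view: device, network, behavioural, and transaction evidence."""
    case_id: str
    evidence: dict = field(default_factory=dict)
    confidence: float = 0.0

# Only actions a human has approved in advance may run automatically.
PREAPPROVED_ACTIONS = {"hold_payment", "step_up_auth"}
CONFIDENCE_THRESHOLD = 0.95  # illustrative; real bounds are set by policy

def decide(case: Case, proposed_action: str) -> str:
    """Execute a pre-approved, high-confidence action; otherwise escalate."""
    if proposed_action in PREAPPROVED_ACTIONS and case.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{proposed_action}"   # system acts; evidence travels with the case
    return "escalate:analyst_review"       # humans handle the exceptions

case = Case("c-001", {"device": "unrecognised", "geo": "mismatch"}, confidence=0.97)
print(decide(case, "hold_payment"))   # auto:hold_payment
print(decide(case, "close_account"))  # escalate:analyst_review (not pre-approved)
```

The key design point is that the automated path is bounded twice: by the pre-approved action list and by the confidence threshold. Anything outside either bound lands with a person.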

Attack Pattern Recognition: The foundation for safe automation

APR gives automation something it has always lacked: verified truth.

It rebuilds how an attack unfolds, linking signals across devices, sessions, and channels to reveal what actually happened. Because each detection is causal, not probabilistic, automated responses can act with confidence.

With that foundation in place, banks can progress naturally through maturity stages:

  1. Automated enrichment: data is collected and correlated automatically.
  2. Confidence-bound actions: trusted signals trigger safe, predefined responses.
  3. Closed-loop feedback: analyst outcomes continuously refine detection.

Each step reduces manual work and improves precision, without losing oversight.
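To make stage 3 concrete, here is a minimal sketch of closed-loop feedback, assuming a simple threshold-based detector (the update rule and learning rate are illustrative assumptions, not a description of APR): confirmed analyst verdicts nudge the alerting threshold toward missed fraud and away from false alerts.

```python
class Baseline:
    """Toy detection baseline refined by analyst outcomes (closed-loop feedback)."""

    def __init__(self, threshold: float = 0.80, learning_rate: float = 0.05):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def feedback(self, score: float, confirmed_fraud: bool) -> None:
        if confirmed_fraud and score < self.threshold:
            # Analyst confirmed fraud the system scored too low: lower the bar.
            self.threshold -= self.learning_rate * (self.threshold - score)
        elif not confirmed_fraud and score >= self.threshold:
            # Analyst cleared an alert as legitimate: raise the bar slightly.
            self.threshold += self.learning_rate * (score - self.threshold + 0.01)

baseline = Baseline()
baseline.feedback(score=0.70, confirmed_fraud=True)   # missed fraud: threshold drops
baseline.feedback(score=0.85, confirmed_fraud=False)  # false alert: threshold rises
```

Each verdict an analyst records does double duty: it closes the case and it tightens the next detection, which is what turns review work from a cost into a training signal.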

For a deeper look at what APR is and why banks need it now, see our previous article “Attack Pattern Recognition (APR): what it is and why banks need it now”.

Governance as a design principle

Responsible automation depends on transparency. Every automated decision should be traceable back to its evidence, thresholds, and policies. Analysts need to see why something happened, auditors need to verify it, and customers deserve to know that it happened for a legitimate reason.

Explainability isn’t an afterthought; it’s the mechanism that makes automation accountable.
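One way to picture that traceability (a hypothetical sketch; field names and the policy identifier are invented for illustration) is an audit record written alongside every automated decision, capturing the evidence, threshold, and policy that produced it:

```python
import json
from datetime import datetime, timezone

def audit_record(decision: str, evidence: dict, threshold: float, policy_id: str) -> str:
    """Serialise one automated decision with everything needed to reconstruct it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "evidence": evidence,      # the signals the decision rests on
        "threshold": threshold,    # the confidence bound in force at the time
        "policy": policy_id,       # the pre-approved policy that was invoked
    }
    return json.dumps(record)      # appended to an immutable audit log

log_line = audit_record(
    decision="hold_payment",
    evidence={"device": "unrecognised", "session": "remote-access tool detected"},
    threshold=0.95,
    policy_id="pol-step-up-007",
)
```

With records like this, an analyst can answer "why did this happen?", an auditor can verify it, and a customer-facing team can explain it, all from the same source.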

Redefining operational success

As automation takes on more of the workload, the way performance is measured will change. Success will be judged by time-to-containment, precision, and automation reliability, not by the number of cases closed.

The work will change, too. Analysts will spend more time validating decisions than collecting data. New roles will emerge around automation engineering, signal quality, and governance. The fraud operations centre becomes an orchestration layer, not a processing line.

Scaling trust, not headcount

Fraud operations today resemble cybersecurity operations from a decade ago: they are heavy on manual triage and light on integration. The same transformation that reshaped the Security Operations Centre (SOC) is now reaching fraud: a gradual shift from manual to semi-automated, then toward autonomous operations.

Attack Pattern Recognition sits at the centre of that shift. It provides verified, explainable data, the only kind that can support safe, auditable automation.

The question isn’t how many analysts to hire, but how much decision-making can be automated responsibly. The institutions that can answer that with confidence will scale faster and run leaner, not because they removed people from the loop, but because they gave them systems worth trusting.
