Explainable artificial intelligence models for detecting suspicious bank transactions
Journal article   Open access   Peer reviewed


Narasimha Kumar Narasapuram, Syed Afaq Ali Shah, Mohd Fairuz Shiratuddin and Ferdous Sohel
International Journal of Machine Learning and Cybernetics, Vol. 17(3), 111
2026
CC BY 4.0 Open Access

Abstract

Keywords: Anti-money laundering; Fraud; Suspicious transactions
Detecting financial crime is a complex challenge due to evolving criminal strategies and fragmented detection systems, particularly in the areas of money laundering and fraud. While easy to implement, traditional rule-based approaches lack adaptability to new threats, and machine learning models, though more effective, often function as opaque "black boxes," limiting their practical use in regulated domains like banking, where interpretability and accountability are essential. This research presents a novel framework that combines intrinsic and post-hoc explainable artificial intelligence (XAI) techniques to detect suspicious bank transactions. Intrinsic methods provide model-inherent transparency, while post-hoc methods offer behavior-level explanations, enabling robust cross-verification of the model’s decision logic. This dual-layered explainability supports both global understanding of the model and local interpretability for individual predictions. Experimental evaluation on synthetic datasets shows that the proposed framework achieves promising results, with F1 scores ranging from 0.54 to 0.63 for anti-money laundering (AML) detection and 0.66 to 0.78 for fraud detection, depending on the underlying model architecture. The use of explainability tools also enables the identification of key features shared across fraud and AML cases, revealing structural similarities. These insights demonstrate that a unified model can effectively capture both crime types, offering practical benefits for integrated financial crime risk management. By addressing the information silos caused by different regulatory frameworks and departmental responsibilities, this work supports the development of joint investigative systems and facilitates better communication between compliance teams. In addition, the explainable nature of the models helps satisfy regulatory scrutiny while improving investigator trust and decision support.
The proposed approach contributes to the growing field of interpretable machine learning by providing a scalable, auditable, and regulator-friendly solution for detecting complex financial crimes.
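The abstract's dual-layered idea can be illustrated in miniature: an intrinsically interpretable model yields its own global feature importances, and a post-hoc method computed on held-out data provides an independent second view to cross-check against. The sketch below is not the authors' pipeline; it is a minimal, hedged illustration using scikit-learn, with a decision tree standing in for the intrinsic model, permutation importance standing in for the post-hoc explainer, and a generated class-imbalanced dataset standing in for the synthetic transactions.

```python
# Minimal sketch (assumed components, not the paper's implementation):
# intrinsic view = tree feature importances; post-hoc view = permutation
# importance; synthetic imbalanced data stands in for bank transactions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score

# Imbalanced synthetic "transactions": minority class plays the role of
# suspicious activity (~10% of samples).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Intrinsically interpretable model: a shallow decision tree.
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

# Global view 1 (intrinsic): impurity-based importances of the fitted tree.
intrinsic = model.feature_importances_

# Global view 2 (post-hoc): permutation importance on held-out data.
posthoc = permutation_importance(model, X_te, y_te, n_repeats=10,
                                 random_state=0).importances_mean

# Evaluate with F1, the metric reported in the abstract.
f1 = f1_score(y_te, model.predict(X_te))
print(f"F1 = {f1:.2f}")

# Cross-verification step: the top-ranked features under both views should
# largely agree if the explanations are consistent.
print("intrinsic top features:", np.argsort(intrinsic)[::-1][:3])
print("post-hoc top features: ", np.argsort(posthoc)[::-1][:3])
```

The cross-check at the end mirrors the paper's motivation for combining the two layers: when an inherent importance ranking and an independent behavior-level ranking converge, investigators gain more confidence in the model's decision logic than either view would provide alone.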

Details

UN Sustainable Development Goals (SDGs)

This output has contributed to the advancement of the following goals:

#16 Peace, Justice and Strong Institutions
