Explainable Artificial Intelligence for Financial Crime Prevention: Translating Machine Learning Outputs into Regulatory and Compliance Decision-Making

Authors

  • Okolie Awele, School of Computing and Data Science, Wentworth Institute of Technology, Boston, USA
  • Daniel Oghenekome Erebi, Department of Electrical and Electronic Engineering, Federal University of Petroleum Resources, Effurun, Nigeria
  • Bright Kofi Ladzro, Department of Mathematics and Statistics, College of Arts and Sciences, American University, Washington, DC, United States
  • Oluwatosin Lawal, Department of Mathematics, Statistical Analytics, Computing and Modeling, Texas A&M University, Kingsville, USA
  • Didunoluwa Olukoya, Independent Researcher, USA
  • Samson Onaopemipo Amoran, Department of Computer Science, Western Illinois University, USA

DOI:

https://doi.org/10.32628/IJSRST2613125

Keywords:

Explainable Artificial Intelligence, Financial Crime Prevention, Regulatory Compliance, Decision Support Systems, Fraud Detection, Governance

Abstract

Financial institutions increasingly rely on machine learning systems to detect fraudulent activity and prevent financial crime across complex transactional environments. These models achieve remarkable predictive accuracy, but their lack of interpretability creates considerable problems for regulatory compliance, auditing, and operational trust. For high-risk financial transactions, regulators and compliance professionals require not only accurate risk forecasts but also transparent, understandable reasoning behind the resulting decisions. This research introduces a regulatory-compliant explainable artificial intelligence (XAI) framework that connects machine learning outputs with financial crime decision-making processes. Rather than proposing new predictive models, the framework focuses on converting the risk scores and explainability outputs of existing models into interpretable decision artifacts that can be readily reviewed for compliance, supervisory oversight, and human-in-the-loop validation. The proposed methodology combines explainability mechanisms with governance-oriented design principles, enabling consistent justification of flagged transactions, enhanced audit trails, and increased accountability in automated financial crime detection and prevention systems. Case study illustrations demonstrate how explainable AI can support escalation, investigation, and reporting decisions in fraud and anti-money laundering contexts. The findings highlight explainable AI as a critical enabler for aligning machine learning innovation with regulatory expectations, contributing to more transparent, trustworthy, and responsible financial crime prevention systems.
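
To make the translation step concrete, the sketch below illustrates one way a transaction risk score and SHAP feature attributions (Lundberg & Lee, 2017) could be packaged into a reviewable decision artifact. This is a minimal illustration, not the authors' implementation: the toy model, feature names, threshold, and escalation rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): packaging a model risk score and SHAP
# feature attributions into an audit-friendly decision artifact. Field names,
# the 0.8 threshold, and the toy data/model are assumptions, not the paper's.
import json
from datetime import datetime, timezone

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy transaction features: standardized amount, hour of day, country risk, velocity.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)
feature_names = ["amount_z", "hour_of_day", "country_risk", "txn_velocity"]

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

def decision_artifact(x_row, txn_id, threshold=0.8):
    """Combine score, top feature attributions, and a recommended action."""
    risk = float(model.predict_proba(x_row.reshape(1, -1))[0, 1])
    shap_vals = explainer.shap_values(x_row.reshape(1, -1))[0]
    top = sorted(zip(feature_names, shap_vals),
                 key=lambda kv: abs(kv[1]), reverse=True)[:3]
    return {
        "transaction_id": txn_id,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "risk_score": round(risk, 4),
        "top_drivers": [{"feature": f, "shap_value": round(float(v), 4)}
                        for f, v in top],
        "recommended_action": "escalate_to_analyst" if risk >= threshold else "auto_clear",
        "model_version": "demo-0.1",  # would come from a model registry in practice
    }

# Example: produce a JSON artifact for one flagged transaction.
print(json.dumps(decision_artifact(X[0], txn_id="TXN-000123"), indent=2))
```

In a production setting the artifact would typically be persisted alongside the alert so that analysts, auditors, and supervisors can review both the score and its drivers; the JSON structure here is only one possible layout.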


References

Basel Committee on Banking Supervision. (2015). Sound management of risks related to money laundering and financing of terrorism. Bank for International Settlements. https://www.bis.org/bcbs/publ/d405.htm

Bahnsen, A. C., Aouada, D., & Ottersten, B. (2015). Cost-sensitive decision trees for fraud detection. Expert Systems with Applications, 42(13), 5558–5567. https://doi.org/10.1016/j.eswa.2015.02.023

Bahnsen, A. C., Aouada, D., & Ottersten, B. (2015). Ensemble of example-dependent cost-sensitive decision trees [Preprint]. arXiv. https://arxiv.org/abs/1505.04637

Dal Pozzolo, A., Bontempi, G., Snoeck, M., & Waterschoot, S. (2018). Adversarial drift detection in fraud detection. IEEE Computational Intelligence Magazine, 13(3), 38–48. https://doi.org/10.1109/MCI.2018.2840734

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning [Preprint]. arXiv. https://arxiv.org/abs/1702.08608

Financial Action Task Force. (2021). Risk-based approach for anti-money laundering and counter-terrorist financing. FATF. https://www.fatf-gafi.org/recommendations.html

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., & Giannotti, F. (2019). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), Article 93. https://doi.org/10.1145/3236009

Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765–4774. https://arxiv.org/abs/1705.07874

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). https://doi.org/10.1145/2939672.2939778

Rudin, C. (2019). Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x

Published

12-02-2026

Issue

Section

Research Articles

How to Cite

[1] Okolie Awele, Daniel Oghenekome Erebi, Bright Kofi Ladzro, Oluwatosin Lawal, Didunoluwa Olukoya, and Samson Onaopemipo Amoran, “Explainable Artificial Intelligence for Financial Crime Prevention: Translating Machine Learning Outputs into Regulatory and Compliance Decision-Making”, Int J Sci Res Sci & Technol, vol. 13, no. 1, pp. 256–263, Feb. 2026, doi: 10.32628/IJSRST2613125.