Machine learning for fraud detection in digital banking: a systematic literature review
- URL: http://arxiv.org/abs/2510.05167v1
- Date: Sat, 04 Oct 2025 22:56:12 GMT
- Title: Machine learning for fraud detection in digital banking: a systematic literature review
- Authors: Md Zahin Hossain George, Md Khorshed Alam, Md Tarek Hasan,
- Abstract summary: This systematic literature review examines the role of machine learning in fraud detection within digital banking. Following the PRISMA guidelines, the review applied a structured identification, screening, eligibility, and inclusion process to ensure methodological rigor and transparency.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This systematic literature review examines the role of machine learning in fraud detection within digital banking, synthesizing evidence from 118 peer-reviewed studies and institutional reports. Following the PRISMA guidelines, the review applied a structured identification, screening, eligibility, and inclusion process to ensure methodological rigor and transparency. The findings reveal that supervised learning methods, such as decision trees, logistic regression, and support vector machines, remain the dominant paradigm due to their interpretability and established performance, while unsupervised anomaly detection approaches are increasingly adopted to address novel fraud patterns in highly imbalanced datasets. Deep learning architectures, particularly recurrent and convolutional neural networks, have emerged as transformative tools capable of modeling sequential transaction data and detecting complex fraud typologies, though challenges of interpretability and real-time deployment persist. Hybrid models that combine supervised, unsupervised, and deep learning strategies demonstrate superior adaptability and detection accuracy, highlighting their potential as convergent solutions.
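The unsupervised anomaly detection the abstract describes for highly imbalanced datasets can be illustrated with a minimal, dependency-free sketch. The robust z-score detector below is a generic illustration, not a method taken from any reviewed study; the function names and the 3.5 cutoff (a common rule of thumb for MAD-based scores) are assumptions for this example:

```python
import statistics

def anomaly_scores(amounts):
    """Robust z-scores: distance from the median in units of the
    median absolute deviation (MAD), which is insensitive to the
    rare extreme values typical of fraud-heavy tails."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts) or 1.0
    return [abs(a - med) / mad for a in amounts]

def flag_suspicious(amounts, threshold=3.5):
    """Flag transactions whose robust z-score exceeds the threshold."""
    return [score > threshold for score in anomaly_scores(amounts)]

# A highly imbalanced toy batch: one outlier among routine amounts.
txns = [12.50, 9.99, 11.20, 10.05, 10.75, 4999.00]
print(flag_suspicious(txns))  # only the last transaction is flagged
```

Because the median and MAD are driven by the bulk of ordinary transactions, the detector needs no labeled fraud examples, which is precisely why such unsupervised approaches suit the extreme class imbalance the review highlights.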
Related papers
- Mechanistic Interpretability for Large Language Model Alignment: Progress, Challenges, and Future Directions [16.821238326410324]
Large language models (LLMs) have achieved remarkable capabilities across diverse tasks, yet their internal decision-making processes remain largely opaque. Mechanistic interpretability has emerged as a critical research direction for understanding and aligning these models. We analyze how interpretability insights have informed alignment strategies including reinforcement learning from human feedback, constitutional AI, and scalable oversight.
arXiv Detail & Related papers (2026-01-21T11:43:57Z) - Explainable AI in Big Data Fraud Detection [3.5429848204449694]
This paper examines how explainable artificial intelligence (XAI) can be integrated into Big Data analytics pipelines for fraud detection and risk management. We review key Big Data characteristics and survey major analytical tools, including distributed storage systems, streaming platforms, and advanced fraud detection models. We identify key research gaps related to scalability, real-time processing, and explainability for graph and temporal models. The paper concludes with open research directions in scalable XAI, privacy-aware explanations, and standardized evaluation methods for explainable fraud detection systems.
arXiv Detail & Related papers (2025-12-17T23:40:54Z) - A Theory of the Mechanics of Information: Generalization Through Measurement of Uncertainty (Learning is Measuring) [0.0]
We introduce a model-free framework that uses surprisal (information-theoretic uncertainty) to analyze and perform inference from raw data. It eliminates distribution modeling, reduces bias, and enables efficient updates, including direct edits and deletion of training data. It emphasizes traceability, interpretability, and data-driven decision making, offering a unified, human-understandable framework for machine learning.
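The surprisal quantity this summary refers to can be computed directly from empirical frequencies. The sketch below is an illustrative assumption (the helper name and the frequency-based estimate are not taken from the paper's actual framework); it only shows the standard information-theoretic definition:

```python
import math
from collections import Counter

def surprisal(value, observations):
    """Empirical surprisal in bits: -log2 of the value's observed
    relative frequency. Rare observations carry high surprisal."""
    counts = Counter(observations)
    p = counts[value] / len(observations)
    return -math.log2(p)

data = ["ok"] * 7 + ["fraud"]
print(surprisal("ok", data))     # common event -> low surprisal
print(surprisal("fraud", data))  # rare event -> 3.0 bits
```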
arXiv Detail & Related papers (2025-10-26T19:45:25Z) - From Physics to Machine Learning and Back: Part II - Learning and Observational Bias in PHM [52.64097278841485]
This review examines how incorporating learning and observational biases through physics-informed modeling and data strategies can guide models toward physically consistent and reliable predictions. Fast-adaptation methods, including meta-learning and few-shot learning, are reviewed alongside domain generalization techniques.
arXiv Detail & Related papers (2025-09-25T14:15:43Z) - Deep Learning Models for Robust Facial Liveness Detection [56.08694048252482]
This study introduces a robust solution through novel deep learning models addressing the deficiencies in contemporary anti-spoofing techniques. By innovatively integrating texture analysis and reflective properties associated with genuine human traits, our models distinguish authentic presence from replicas with remarkable precision.
arXiv Detail & Related papers (2025-08-12T17:19:20Z) - Towards Reliable Forgetting: A Survey on Machine Unlearning Verification [26.88376128769619]
This paper presents the first structured survey of machine unlearning verification methods. We propose a taxonomy that organizes current techniques into two principal categories -- behavioral verification and parametric verification. We examine their underlying assumptions, strengths, and limitations, and identify potential vulnerabilities in practical deployment.
arXiv Detail & Related papers (2025-06-18T03:33:59Z) - Anomaly Detection and Generation with Diffusion Models: A Survey [51.61574868316922]
Anomaly detection (AD) plays a pivotal role across diverse domains, including cybersecurity, finance, healthcare, and industrial manufacturing. Recent advancements in deep learning, specifically diffusion models (DMs), have sparked significant interest. This survey aims to guide researchers and practitioners in leveraging DMs for innovative AD solutions across diverse applications.
arXiv Detail & Related papers (2025-06-11T03:29:18Z) - Neural Network Reprogrammability: A Unified Theme on Model Reprogramming, Prompt Tuning, and Prompt Instruction [55.914891182214475]
We introduce neural network reprogrammability as a unifying framework for model adaptation. We present a taxonomy that categorizes such information manipulation approaches across four key dimensions. We also analyze remaining technical challenges and ethical considerations.
arXiv Detail & Related papers (2025-06-05T05:42:27Z) - Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs [58.24692529185971]
We introduce a comprehensive auditing framework for unlearning evaluation comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods. We evaluate the effectiveness and robustness of different unlearning strategies.
arXiv Detail & Related papers (2025-05-29T09:19:07Z) - Proactive Fraud Defense: Machine Learning's Evolving Role in Protecting Against Online Fraud [0.0]
This paper explores the transformative role of machine learning in fraud detection and prevention.
It highlights the strengths of machine learning in processing vast datasets, identifying intricate fraud patterns, and providing real-time predictions.
It emphasizes the potential of machine learning to revolutionize fraud detection frameworks by making them more dynamic, efficient, and capable of handling the growing complexity of fraud across various industries.
arXiv Detail & Related papers (2024-10-26T21:34:28Z) - Adversarial Challenges in Network Intrusion Detection Systems: Research Insights and Future Prospects [0.33554367023486936]
This paper provides a comprehensive review of machine learning-based Network Intrusion Detection Systems (NIDS).
We critically examine existing research in NIDS, highlighting key trends, strengths, and limitations.
We discuss emerging challenges in the field and offer insights for the development of more robust and resilient NIDS.
arXiv Detail & Related papers (2024-09-27T13:27:29Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.