A Privacy-Preserving Federated Framework with Hybrid Quantum-Enhanced Learning for Financial Fraud Detection
- URL: http://arxiv.org/abs/2507.22908v1
- Date: Tue, 15 Jul 2025 17:29:12 GMT
- Title: A Privacy-Preserving Federated Framework with Hybrid Quantum-Enhanced Learning for Financial Fraud Detection
- Authors: Abhishek Sawaika, Swetang Krishna, Tushar Tomar, Durga Pritam Suggisetti, Aditi Lal, Tanmaya Shrivastav, Nouhaila Innan, Muhammad Shafique
- Abstract summary: We introduce a specialised federated learning framework that combines a quantum-enhanced Long Short-Term Memory (LSTM) model with advanced privacy-preserving techniques. By integrating quantum layers into the LSTM architecture, our approach adeptly captures complex cross-transactional patterns, resulting in an approximate 5% performance improvement. This pseudo-centralised setup with a Quantum LSTM model enhances fraud detection accuracy and reinforces the security and confidentiality of sensitive financial data.
- Score: 2.9447042849184495
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Rapid growth of digital transactions has led to a surge in fraudulent activities, challenging traditional detection methods in the financial sector. To tackle this problem, we introduce a specialised federated learning framework that uniquely combines a quantum-enhanced Long Short-Term Memory (LSTM) model with advanced privacy-preserving techniques. By integrating quantum layers into the LSTM architecture, our approach adeptly captures complex cross-transactional patterns, resulting in an approximate 5% performance improvement across key evaluation metrics compared to conventional models. Central to our framework is "FedRansel", a novel method designed to defend against poisoning and inference attacks, thereby reducing model degradation and inference accuracy by 4-8% compared to standard differential privacy mechanisms. This pseudo-centralised setup with a Quantum LSTM model enhances fraud detection accuracy and reinforces the security and confidentiality of sensitive financial data.
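The abstract describes a federated setup in which clients train locally and a server aggregates their models, but it does not spell out the aggregation itself. As a rough, hypothetical sketch of the federated-averaging pattern such frameworks build on (this is not the authors' FedRansel method, and the local step stands in for their quantum-LSTM training):

```python
import numpy as np

def local_update(weights, grads, lr=0.1):
    """One client's local gradient step (placeholder for quantum-LSTM training)."""
    return weights - lr * grads

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client parameters weighted by local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients with different amounts of local data.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
sizes = [100, 300, 600]
updates = [local_update(global_w, rng.normal(size=4)) for _ in sizes]
global_w = fed_avg(updates, sizes)  # new global model for the next round
```

Raw data never leaves the clients; only parameter updates are shared, which is the property the paper's privacy mechanisms then harden further.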
Related papers
- TensoMeta-VQC: A Tensor-Train-Guided Meta-Learning Framework for Robust and Scalable Variational Quantum Computing [60.996803677584424]
TensoMeta-VQC is a novel tensor-train (TT)-guided meta-learning framework designed to improve the robustness and scalability of VQC significantly. Our framework fully delegates the generation of quantum circuit parameters to a classical TT network, effectively decoupling optimization from quantum hardware.
arXiv Detail & Related papers (2025-08-01T23:37:55Z) - FD4QC: Application of Classical and Quantum-Hybrid Machine Learning for Financial Fraud Detection A Technical Report [36.1999598554273]
This report investigates and compares the efficacy of classical, quantum, and quantum-hybrid machine learning models for the binary behavioural classification of fraudulent financial activities. We implement and evaluate a range of models on the IBM Anti-Money Laundering (AML) dataset. We propose Fraud Detection for Quantum Computing (FD4QC), a practical, API-driven system architecture designed for real-world deployment.
arXiv Detail & Related papers (2025-07-25T16:08:22Z) - QFDNN: A Resource-Efficient Variational Quantum Feature Deep Neural Networks for Fraud Detection and Loan Prediction [22.867189884561768]
We propose a quantum feature deep neural network (QFDNN) to solve credit card fraud detection and loan eligibility prediction challenges. QFDNN is noise-resilient and supports sustainability through its resource-efficient design and minimal computational overhead. Our findings highlight QFDNN's potential to enhance trust and security in social financial technology by accurately detecting fraudulent transactions.
arXiv Detail & Related papers (2025-04-28T09:47:28Z) - Enhancing LLM Reliability via Explicit Knowledge Boundary Modeling [48.15636223774418]
Large language models (LLMs) are prone to hallucination stemming from misaligned self-awareness. We propose the Explicit Knowledge Boundary Modeling framework to integrate fast and slow reasoning systems to harmonize reliability and usability.
arXiv Detail & Related papers (2025-03-04T03:16:02Z) - Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems [89.35169042718739]
Collaborative inference enables end users to leverage powerful deep learning models without exposing sensitive raw data to cloud servers. Recent studies have revealed that these intermediate features may not sufficiently preserve privacy, as information can be leaked and raw data can be reconstructed via model inversion attacks (MIAs). This work first theoretically proves that the conditional entropy of inputs given intermediate features provides a guaranteed lower bound on the reconstruction mean square error (MSE) under any MIA. We then derive a differentiable and solvable measure for bounding this conditional entropy based on Gaussian mixture estimation, and propose a conditional entropy algorithm to enhance inversion robustness.
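The entropy-to-MSE connection in that summary can be made concrete with the scalar entropy-power bound: for any reconstruction of X from observed features, MSE >= exp(2h) / (2*pi*e), where h is the (conditional) differential entropy in nats. A minimal numeric check, assuming a Gaussian input where the bound is tight (this illustrates the general principle, not the paper's Gaussian-mixture estimator):

```python
import math

def entropy_power_mse_bound(h):
    """Lower bound on reconstruction MSE implied by differential entropy h (in nats):
    MSE >= exp(2h) / (2*pi*e)."""
    return math.exp(2 * h) / (2 * math.pi * math.e)

sigma2 = 4.0
# Differential entropy of a Gaussian N(0, sigma2): h = 0.5 * ln(2*pi*e*sigma2).
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma2)
bound = entropy_power_mse_bound(h_gauss)  # for a Gaussian the bound equals sigma2
```

Higher conditional entropy of inputs given the leaked features therefore directly translates into a larger guaranteed reconstruction error for any attacker.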
arXiv Detail & Related papers (2025-03-01T07:15:21Z) - Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain sensitive user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - Towards Scalable Quantum Key Distribution: A Machine Learning-Based Cascade Protocol Approach [2.363573186878154]
Quantum Key Distribution (QKD) is a pivotal technology in the quest for secure communication.
Traditional key rate determination methods, dependent on complex mathematical models, often fall short in efficiency and scalability.
We propose an approach that involves integrating machine learning (ML) techniques with the Cascade error correction protocol.
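The Cascade protocol that summary refers to corrects errors between two sifted keys via interactive parity comparisons. A hypothetical sketch of its BINARY subroutine (the names and block handling here are illustrative; the paper's ML integration is not shown):

```python
def parity(bits, idx):
    """Parity of the bits at the given index positions."""
    return sum(bits[i] for i in idx) % 2

def binary_search_error(alice, bob, idx):
    """BINARY step of Cascade: when a block's parities differ, locate the single
    flipped bit with about log2(n) parity exchanges by halving the block."""
    while len(idx) > 1:
        half = idx[:len(idx) // 2]
        if parity(alice, half) != parity(bob, half):
            idx = half
        else:
            idx = idx[len(idx) // 2:]
    return idx[0]

alice = [1, 0, 1, 1, 0, 0, 1, 0]   # Alice's sifted key block
bob = alice.copy()
bob[5] ^= 1                        # one channel error in Bob's copy
block = list(range(len(alice)))
if parity(alice, block) != parity(bob, block):
    pos = binary_search_error(alice, bob, block)
    bob[pos] ^= 1                  # correct the located bit
```

Full Cascade runs several passes with shuffled blocks of adaptive size; the cited work's contribution is using ML to tune such parameters for scalable key-rate estimation.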
arXiv Detail & Related papers (2024-09-12T13:40:08Z) - QFNN-FFD: Quantum Federated Neural Network for Financial Fraud Detection [4.2435928520499635]
This study introduces the Quantum Federated Neural Network for Financial Fraud Detection (QFNN-FFD). QFNN-FFD is a framework merging Quantum Machine Learning (QML) and quantum computing with Federated Learning (FL) for financial fraud detection. Using quantum technologies' computational power and the robust data privacy protections offered by FL, QFNN-FFD emerges as a secure and efficient method for identifying fraudulent transactions.
arXiv Detail & Related papers (2024-04-03T09:19:46Z) - Enhancing Security in Federated Learning through Adaptive
Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
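Consensus-based update validation of the kind that entry describes can be sketched as outlier filtering against the clients' joint "vote". The threshold rule below (median distance plus a MAD multiple) is a hypothetical stand-in for the paper's adaptive thresholding mechanism:

```python
import numpy as np

def filter_updates(updates, k=3.0):
    """Keep client updates close to the consensus (coordinate-wise median).
    Hypothetical adaptive threshold: median distance + k * MAD of distances."""
    U = np.stack(updates)
    consensus = np.median(U, axis=0)
    dists = np.linalg.norm(U - consensus, axis=1)
    mad = np.median(np.abs(dists - np.median(dists)))
    thresh = np.median(dists) + k * mad
    return [i for i, d in enumerate(dists) if d <= thresh]

honest = [np.array([1.0, 1.0])] * 4
attacker = [np.array([-9.0, -9.0])]  # label-flipping client pushes the opposite way
kept = filter_updates(honest + attacker)  # attacker's index is excluded
```

Because the median is robust to a minority of poisoned updates, the aggregator can drop the attacker without knowing in advance which client is malicious.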
arXiv Detail & Related papers (2024-03-05T20:54:56Z) - Multi-Domain Polarization for Enhancing the Physical Layer Security of MIMO Systems [51.125572358881556]
A novel Physical Layer Security (PLS) framework is conceived for enhancing the security of wireless communication systems.
We design a sophisticated key generation scheme based on multi-domain polarization, and the corresponding receivers.
Our findings indicate that the innovative PLS framework effectively enhances the security and reliability of wireless communication systems.
arXiv Detail & Related papers (2023-10-31T05:50:24Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with
$f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
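A canonical discrete-valued mechanism of the kind analysed there is randomized response. As a hedged illustration of a finite-output LDP mechanism (this shows the classic eps-LDP construction, not the paper's $f$-DP analysis):

```python
import math
import random

def randomized_response(bit, eps):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    This discrete mechanism satisfies eps-local differential privacy."""
    p_true = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p_true else 1 - bit

def unbiased_estimate(reports, eps):
    """Debias the observed frequency of 1s back to the true proportion."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    obs = sum(reports) / len(reports)
    return (obs - (1 - p)) / (2 * p - 1)

random.seed(0)
reports = [randomized_response(1, eps=1.0) for _ in range(10_000)]
est = unbiased_estimate(reports, eps=1.0)  # close to the true proportion, 1.0
```

Each report costs one bit of communication, which is exactly the communication-privacy-accuracy tradeoff the cited paper tightens via $f$-DP.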
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.