A Privacy-Preserving Hybrid Federated Learning Framework for Financial
Crime Detection
- URL: http://arxiv.org/abs/2302.03654v3
- Date: Tue, 18 Apr 2023 19:28:38 GMT
- Title: A Privacy-Preserving Hybrid Federated Learning Framework for Financial
Crime Detection
- Authors: Haobo Zhang, Junyuan Hong, Fan Dong, Steve Drew, Liangjie Xue, Jiayu
Zhou
- Abstract summary: We propose a hybrid federated learning system that offers secure and privacy-aware learning and inference for financial crime detection.
We conduct extensive empirical studies to evaluate the proposed framework's detection performance and privacy-protection capability.
- Score: 27.284477227066972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent decade has witnessed a surge in financial crimes across
the public and private sectors, with an average cost of scams of $102m to
financial institutions in 2022. Developing a mechanism for battling financial
crimes is a pressing task that requires in-depth collaboration among multiple
institutions, yet such collaboration imposes significant technical challenges
due to the privacy and security requirements of distributed financial data. For
example, consider modern payment network systems, which can generate millions
of transactions per day across a large number of global institutions. Training
a model to detect fraudulent transactions requires not only the secured
transaction records but also the private account activities of the parties
involved in each transaction, held by the corresponding bank systems. The
distributed nature of both samples and features prevents most existing learning
systems from being directly adopted to handle this data mining task. In this paper, we
collectively address these challenges by proposing a hybrid federated learning
system that offers secure and privacy-aware learning and inference for
financial crime detection. We conduct extensive empirical studies to evaluate
the proposed framework's detection performance and privacy-protection
capability, and assess its robustness against common malicious attacks on
collaborative learning. We release our source code at
https://github.com/illidanlab/HyFL .
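The data in this setting are split both by samples (transactions held by different payment-network partitions) and by features (account activity held by different banks). The sketch below illustrates one learning round under such a hybrid split; the linear model and all names are illustrative assumptions, not the actual HyFL implementation from the repository above.

```python
# Minimal sketch of one hybrid federated round, assuming a linear model and
# illustrative names (not the actual HyFL API).
import numpy as np

rng = np.random.default_rng(0)

# Vertical split: two banks each hold a disjoint feature block for the same accounts.
n_accounts = 1000
bank_features = [rng.normal(size=(n_accounts, 4)), rng.normal(size=(n_accounts, 6))]
bank_weights = [rng.normal(size=4), rng.normal(size=6)]

def partial_scores(X, w):
    """Each bank computes partial logits on its own feature block only."""
    return X @ w

# The coordinating party sums partial scores instead of seeing raw features.
logits = sum(partial_scores(X, w) for X, w in zip(bank_features, bank_weights))
labels = (rng.random(n_accounts) < 0.05).astype(float)  # synthetic fraud labels

# Horizontal split: several payment-network partitions hold disjoint transaction
# samples and send local model updates, which are averaged FedAvg-style.
def local_update(w, X, y, lr=0.1):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return w - lr * X.T @ (p - y) / len(y)

client_updates = [local_update(bank_weights[0], bank_features[0][idx], labels[idx])
                  for idx in np.array_split(np.arange(n_accounts), 3)]
bank_weights[0] = np.mean(client_updates, axis=0)  # server-side averaging
```

In the actual framework, the exchanged partial scores and updates would additionally be protected (e.g., by secure aggregation or encryption), which this sketch omits.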
Related papers
- Ten Challenging Problems in Federated Foundation Models [55.343738234307544]
Federated Foundation Models (FedFMs) represent a distributed learning paradigm that fuses the general competences of foundation models with the privacy-preserving capabilities of federated learning.
This paper provides a comprehensive summary of the ten challenging problems inherent in FedFMs, encompassing foundational theory, utilization of private data, continual learning, unlearning, Non-IID and graph data, bidirectional knowledge transfer, incentive mechanism design, game mechanism design, model watermarking, and efficiency.
arXiv Detail & Related papers (2025-02-14T04:01:15Z)
- Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems [46.404531555921906]
We propose an architecture for blockchain-based PAISs aimed at preserving both confidentiality and transparency.
Smart contracts enact, enforce and store public interactions, while attribute-based encryption techniques are adopted to specify access grants to confidential information.
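As a rough illustration of the access-control idea only (not the proposed architecture, and not real attribute-based encryption), the sketch below keeps a confidential payload encrypted and releases it only to requesters whose attributes satisfy a policy; Fernet symmetric encryption from the `cryptography` package is used as a stand-in for ABE, and all names are hypothetical.

```python
# Toy stand-in for attribute-gated access to confidential process data; the
# dictionary policy check and Fernet encryption only approximate real ABE.
from cryptography.fernet import Fernet

policy = {"role": {"auditor", "regulator"}, "org": {"bank_a"}}

def satisfies(attrs: dict, policy: dict) -> bool:
    """Requester attributes must match every attribute required by the policy."""
    return all(attrs.get(k) in allowed for k, allowed in policy.items())

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"confidential process payload")

def request_access(attrs: dict):
    # In the proposed architecture, public interactions live on-chain while
    # confidential data stays encrypted; here we simply gate the key off-chain.
    return Fernet(key).decrypt(ciphertext) if satisfies(attrs, policy) else None

print(request_access({"role": "auditor", "org": "bank_a"}))  # payload
print(request_access({"role": "clerk", "org": "bank_a"}))    # None
```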
arXiv Detail & Related papers (2024-12-07T20:18:36Z)
- DPFedBank: Crafting a Privacy-Preserving Federated Learning Framework for Financial Institutions with Policy Pillars [0.09363323206192666]
This paper presents DPFedBank, an innovative framework enabling financial institutions to collaboratively develop machine learning models.
DPFedBank is designed to address the unique privacy and security challenges associated with financial data, allowing institutions to share insights without exposing sensitive information.
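A minimal sketch of the kind of differentially private aggregation such frameworks typically rely on is shown below: each institution clips its model update and adds calibrated Gaussian noise before the server averages. The mechanism and parameters are generic assumptions, not DPFedBank's actual design.

```python
# Sketch of DP-style federated averaging: clip per-institution updates and add
# Gaussian noise before aggregation. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
clip_norm, noise_multiplier = 1.0, 1.2

def privatize(update: np.ndarray) -> np.ndarray:
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Local-DP-style: each client noises its own clipped update before sending.
    return clipped + rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)

client_updates = [rng.normal(size=8) for _ in range(5)]  # one per institution
global_update = np.mean([privatize(u) for u in client_updates], axis=0)
```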
arXiv Detail & Related papers (2024-10-17T16:51:56Z)
- Privacy Technologies for Financial Intelligence [6.287201938212411]
Financial crimes like terrorism financing and money laundering can have real impacts on society.
Data related to different pieces of the overall puzzle is usually distributed across a network of financial institutions, regulators, and law-enforcement agencies.
Recent advances in Privacy-Preserving Data Matching and Machine Learning provide an opportunity for regulators and the financial industry to come together.
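A much-simplified illustration of privacy-preserving data matching: two parties compare keyed hashes of customer identifiers instead of the identifiers themselves. Real deployments would use private set intersection or similar protocols; the shared key and helper names below are assumptions for the sketch.

```python
# Simplified keyed-hash matching between two institutions; a stand-in for
# proper private set intersection, assuming a key agreed out-of-band.
import hmac, hashlib

shared_key = b"agreed-out-of-band"  # assumption for this sketch

def tokenize(identifier: str) -> str:
    return hmac.new(shared_key, identifier.encode(), hashlib.sha256).hexdigest()

bank_customers = {"alice", "bob", "carol"}
regulator_watchlist = {"bob", "dave"}

bank_tokens = {tokenize(c) for c in bank_customers}
watchlist_tokens = {tokenize(c): c for c in regulator_watchlist}

matches = [name for tok, name in watchlist_tokens.items() if tok in bank_tokens]
print(matches)  # ['bob'] -- matched without exchanging raw identifiers
```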
arXiv Detail & Related papers (2024-08-19T12:13:53Z)
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- Locally Differentially Private Embedding Models in Distributed Fraud Prevention Systems [2.001149416674759]
We present a collaborative deep learning framework for fraud prevention, designed from a privacy standpoint, and awarded at the recent PETs Prize Challenges.
We leverage latent embedded representations of varied-length transaction sequences, along with local differential privacy, in order to construct a data release mechanism which can securely inform externally hosted fraud and anomaly detection models.
We assess our contribution on two distributed data sets donated by large payment networks, and demonstrate robustness to popular inference-time attacks, along with utility-privacy trade-offs analogous to published work in alternative application domains.
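A minimal sketch of the local-differential-privacy step described above: each participant clips its transaction-sequence embedding and adds Laplace noise locally before release. The embedding dimension, clipping bound, and epsilon are illustrative assumptions, not the paper's actual mechanism.

```python
# Sketch of a local DP release of a transaction-sequence embedding:
# clip to a fixed L1 bound, then add Laplace noise calibrated to that bound.
import numpy as np

rng = np.random.default_rng(2)
epsilon, l1_bound = 1.0, 1.0

def ldp_release(embedding: np.ndarray) -> np.ndarray:
    norm = np.linalg.norm(embedding, ord=1)
    clipped = embedding * min(1.0, l1_bound / (norm + 1e-12))
    # Laplace mechanism: L1 sensitivity of a clipped vector is 2 * l1_bound.
    return clipped + rng.laplace(scale=2 * l1_bound / epsilon, size=embedding.shape)

embedding = rng.normal(size=16)   # stand-in for a learned sequence embedding
noisy = ldp_release(embedding)    # safe to share with the externally hosted model
```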
arXiv Detail & Related papers (2024-01-03T14:04:18Z)
- Transparency and Privacy: The Role of Explainable AI and Federated Learning in Financial Fraud Detection [0.9831489366502302]
This research introduces a novel approach using Federated Learning (FL) and Explainable AI (XAI) to address these challenges.
FL enables financial institutions to collaboratively train a model to detect fraudulent transactions without directly sharing customer data.
XAI ensures that the predictions made by the model can be understood and interpreted by human experts, adding a layer of transparency and trust to the system.
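Very roughly, the combination above can be sketched as federated averaging of locally trained models plus a post-hoc explanation of each flagged transaction. The per-feature contribution of a linear model below is only a stand-in for the XAI component, and all names and numbers are illustrative.

```python
# Rough sketch: FedAvg over banks' logistic-regression weights, then a simple
# per-feature contribution (weight * feature value) as a stand-in for XAI.
import numpy as np

rng = np.random.default_rng(3)
feature_names = ["amount", "hour", "merchant_risk", "velocity"]

# Each bank trains locally and shares only its weight vector.
bank_weights = [rng.normal(size=4) for _ in range(3)]
global_w = np.mean(bank_weights, axis=0)          # FedAvg aggregation

def explain(x: np.ndarray) -> dict:
    """Per-feature contribution to the fraud score of one transaction."""
    return dict(zip(feature_names, global_w * x))

tx = np.array([950.0, 3.0, 0.8, 12.0])
score = 1.0 / (1.0 + np.exp(-global_w @ tx))
print(score, explain(tx))
```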
arXiv Detail & Related papers (2023-12-20T18:26:59Z)
- Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
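As a toy illustration of the attack side of such a game (not the competition's actual setup), the sketch below applies an FGSM-style perturbation to a transaction feature vector against a simple logistic scorer; the model, features, and budget are assumptions.

```python
# FGSM-style perturbation of a transaction feature vector against a toy
# logistic fraud scorer; purely illustrative of the attack/defense setting.
import numpy as np

rng = np.random.default_rng(4)
w, b = rng.normal(size=6), 0.0          # toy model weights
eps = 0.05                              # per-feature perturbation budget

def fraud_score(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.abs(rng.normal(size=6))          # a flagged transaction's features
s = fraud_score(x)
grad = s * (1 - s) * w                  # gradient of the score w.r.t. x
x_adv = x - eps * np.sign(grad)         # step to lower the fraud score

print(fraud_score(x), fraud_score(x_adv))  # score drops after the attack
```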
arXiv Detail & Related papers (2023-08-22T12:53:09Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
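A heavily simplified sketch of the generic pattern behind signature-based ownership claims is shown below: the owner signs a digest of the global model with an Ed25519 key (via the `cryptography` package) and any holder of the public key can verify the claim. This is not FedSOV's actual scheme, which additionally handles how the credential is bound to the federated model.

```python
# Generic pattern behind signature-based model ownership claims: sign a digest
# of the model parameters; verification needs only the owner's public key.
import hashlib
import numpy as np
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

weights = np.random.default_rng(5).normal(size=128)   # stand-in global model
digest = hashlib.sha256(weights.tobytes()).digest()

owner_key = Ed25519PrivateKey.generate()
signature = owner_key.sign(digest)                    # ownership credential

# verify() raises cryptography.exceptions.InvalidSignature if the signature
# or the model digest does not match.
owner_key.public_key().verify(signature, digest)
print("ownership claim verified")
```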
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- On some studies of Fraud Detection Pipeline and related issues from the scope of Ensemble Learning and Graph-based Learning [0.5820960526832067]
The UK anti-fraud charity Fraud Advisory Panel estimates business costs of fraud at £144 billion.
Building an efficient fraud detection system is challenging due to many difficult problems, e.g., imbalanced data, computing costs, etc.
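One of the difficulties mentioned, class imbalance, is commonly handled by reweighting the rare fraud class; below is a minimal scikit-learn sketch on synthetic data (the dataset, fraud rate, and parameters are illustrative).

```python
# Minimal sketch of handling class imbalance in fraud detection by
# reweighting the rare fraud class; data and parameters are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(5000, 8))
y = (rng.random(5000) < 0.02).astype(int)       # ~2% fraud rate

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, y)
print(clf.predict_proba(X[:5])[:, 1])           # fraud probabilities
```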
arXiv Detail & Related papers (2022-05-10T02:13:58Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
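One recurring pitfall in this area, temporal data snooping (evaluating on events that precede the training data), can be avoided with a time-ordered split. The sketch below is a generic illustration of that guard, not one of the paper's specific recommendations.

```python
# Generic guard against temporal data snooping: split transactions by time so
# the model is never evaluated on events older than its training data.
import numpy as np

rng = np.random.default_rng(7)
timestamps = np.sort(rng.integers(0, 1_000_000, size=10_000))
features = rng.normal(size=(10_000, 5))

cutoff = np.quantile(timestamps, 0.8)           # train on the earliest 80%
train_mask = timestamps <= cutoff
X_train, X_test = features[train_mask], features[~train_mask]
print(len(X_train), len(X_test))
```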
arXiv Detail & Related papers (2020-10-19T13:09:31Z)