Transparency and Privacy: The Role of Explainable AI and Federated
Learning in Financial Fraud Detection
- URL: http://arxiv.org/abs/2312.13334v1
- Date: Wed, 20 Dec 2023 18:26:59 GMT
- Title: Transparency and Privacy: The Role of Explainable AI and Federated
Learning in Financial Fraud Detection
- Authors: Tomisin Awosika, Raj Mani Shukla, and Bernardi Pranggono
- Abstract summary: This research introduces a novel approach using Federated Learning (FL) and Explainable AI (XAI) to address these challenges.
FL enables financial institutions to collaboratively train a model to detect fraudulent transactions without directly sharing customer data.
XAI ensures that the predictions made by the model can be understood and interpreted by human experts, adding a layer of transparency and trust to the system.
- Score: 0.9831489366502302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fraudulent transactions and how to detect them remain a significant problem
for financial institutions around the world. The need for advanced fraud
detection systems to safeguard assets and maintain customer trust is paramount
for financial institutions, but some factors make the development of effective
and efficient fraud detection systems a challenge. One of such factors is the
fact that fraudulent transactions are rare and that many transaction datasets
are imbalanced; that is, there are fewer significant samples of fraudulent
transactions than legitimate ones. This data imbalance can affect the
performance or reliability of the fraud detection model. Moreover, due to the
data privacy laws that all financial institutions must comply with, sharing
customer data to build a higher-performing centralized model is
impossible. Furthermore, the fraud detection technique should be transparent so
that it does not affect the user experience. Hence, this research introduces a
novel approach using Federated Learning (FL) and Explainable AI (XAI) to
address these challenges. FL enables financial institutions to collaboratively
train a model to detect fraudulent transactions without directly sharing
customer data, thereby preserving data privacy and confidentiality. Meanwhile,
the integration of XAI ensures that the predictions made by the model can be
understood and interpreted by human experts, adding a layer of transparency and
trust to the system. Experimental results, based on realistic transaction
datasets, reveal that the FL-based fraud detection system consistently
demonstrates high performance metrics. This study underscores FL's potential as
an effective and privacy-preserving tool in the fight against fraud.
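The collaborative-training idea at the heart of the abstract can be illustrated with a minimal federated averaging (FedAvg) sketch. The logistic-regression clients, the synthetic "bank" data, and all hyperparameters below are illustrative assumptions, not the paper's actual setup; only model weights cross institutional boundaries, never raw transactions.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training step: logistic regression via gradient
    descent. The raw data (X, y) never leaves the client; only the updated
    weight vector is returned to the server."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid predictions
        grad = X.T @ (p - y) / len(y)       # gradient of the logistic loss
        w -= lr * grad
    return w

def fed_avg(client_data, dim, rounds=20):
    """Server loop: broadcast the global weights, then average the clients'
    updates weighted by sample count (standard FedAvg)."""
    w = np.zeros(dim)
    total = sum(len(y) for _, y in client_data)
    for _ in range(rounds):
        w = sum(len(y) / total * local_update(w, X, y)
                for X, y in client_data)
    return w

# Toy example: two "banks" whose fraud labels follow the same linear pattern.
rng = np.random.default_rng(0)
def make_bank(n):
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)
    return X, y

banks = [make_bank(200), make_bank(300)]
w = fed_avg(banks, dim=3)

# Evaluate the jointly trained model on held-out data.
X_test, y_test = make_bank(100)
acc = np.mean(((X_test @ w) > 0) == y_test)
```

The global model recovers the shared fraud pattern even though neither bank ever sees the other's transactions, which is the privacy argument the abstract makes.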
Related papers
- Evaluating Fairness in Transaction Fraud Models: Fairness Metrics, Bias Audits, and Challenges [3.499319293058353]
Despite extensive research on algorithmic fairness, there is a notable gap in the study of bias in fraud detection models.
These challenges include the need for fairness metrics that account for fraud data's imbalanced nature and the tradeoff between fraud protection and service quality.
We present a comprehensive fairness evaluation of transaction fraud models using public synthetic datasets.
arXiv Detail & Related papers (2024-09-06T16:08:27Z)
- Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Model (LLM) embeddings.
Our experiments demonstrate that LLMs contribute valuable information to anomaly detection as our models outperform the baselines.
arXiv Detail & Related papers (2024-06-05T20:19:09Z)
- Starlit: Privacy-Preserving Federated Learning to Enhance Financial Fraud Detection [2.436659710491562]
Federated Learning (FL) is a data-minimization approach enabling collaborative model training across diverse clients with local data.
State-of-the-art FL solutions for identifying fraudulent financial transactions each exhibit a subset of key limitations.
We introduce Starlit, a novel scalable privacy-preserving FL mechanism that overcomes these limitations.
arXiv Detail & Related papers (2024-01-19T15:37:11Z)
- Privacy-Preserving Financial Anomaly Detection via Federated Learning & Multi-Party Computation [17.314619091307343]
We describe a privacy-preserving framework that allows financial institutions to jointly train highly accurate anomaly detection models.
We show that our solution enables the network to train a highly accurate anomaly detection model while preserving privacy of customer data.
arXiv Detail & Related papers (2023-10-06T19:16:41Z)
- Transaction Fraud Detection via an Adaptive Graph Neural Network [64.9428588496749]
We propose an Adaptive Sampling and Aggregation-based Graph Neural Network (ASA-GNN) that learns discriminative representations to improve the performance of transaction fraud detection.
A neighbor sampling strategy is performed to filter noisy nodes and supplement information for fraudulent nodes.
Experiments on three real financial datasets demonstrate that the proposed method ASA-GNN outperforms state-of-the-art ones.
arXiv Detail & Related papers (2023-07-11T07:48:39Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Application of Deep Reinforcement Learning to Payment Fraud [0.0]
A typical fraud detection system employs standard supervised learning methods where the focus is on maximizing the fraud recall rate.
We argue that such a formulation can lead to suboptimal solutions.
We formulate fraud detection as a sequential decision-making problem, encoding utility within the model in the form of the reward function.
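The utility-based formulation can be made concrete with a toy reward function: when rewards reflect transaction amounts and review costs rather than raw recall, the recall-maximizing policy is no longer the best one. The specific costs, rewards, and policies below are hypothetical illustrations, not the paper's actual utilities.

```python
def reward(action, is_fraud, amount, review_cost=5.0):
    """Asymmetric reward encoding business utility instead of plain recall.
    action: 1 = flag the transaction, 0 = approve it."""
    if action == 1:
        # Blocked fraud saves the amount; flagging a good customer costs friction.
        return amount if is_fraud else -review_cost
    # Missed fraud loses the amount; a smooth approval earns a small reward.
    return -amount if is_fraud else 0.1

# Toy batch of (amount, is_fraud) transactions.
batch = [(100.0, True), (50.0, False), (20.0, False), (200.0, True), (10.0, False)]

def total_reward(policy):
    """Sum the reward of a policy (a map from amount to action) over the batch."""
    return sum(reward(policy(amount), is_fraud, amount)
               for amount, is_fraud in batch)

flag_all = total_reward(lambda amount: 1)                 # perfect recall
approve_all = total_reward(lambda amount: 0)              # zero recall
selective = total_reward(lambda amount: int(amount > 80)) # flag large amounts only
```

Here `flag_all` achieves perfect recall yet earns less total utility than the selective policy (285.0 vs. roughly 300.3), which is exactly the kind of suboptimality the authors attribute to recall-maximizing formulations.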
arXiv Detail & Related papers (2021-12-08T11:30:53Z)
- Relational Graph Neural Networks for Fraud Detection in a Super-App environment [53.561797148529664]
We propose a framework of relational graph convolutional network methods for fraudulent behaviour prevention in the financial services of a Super-App.
We use an interpretability algorithm for graph neural networks to determine the most important relations to the classification task of the users.
Our results show that there is added value in models that leverage the Super-App's alternative data and the interactions found in its high connectivity.
arXiv Detail & Related papers (2021-07-29T00:02:06Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
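One concrete way uncertainty can augment decision-making, as this entry suggests, is to defer a transaction to a human reviewer when an ensemble of fraud models disagrees. The entropy threshold, ensemble scores, and defer convention below are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean fraud probability across an ensemble: a simple,
    commonly used uncertainty estimate (higher = more disagreement)."""
    p = np.clip(np.mean(probs, axis=0), 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def decide(probs, threshold=0.5, max_entropy=0.5):
    """Flag (1), approve (0), or defer to a human reviewer (-1) when the
    ensemble's uncertainty exceeds the entropy budget."""
    p = np.mean(probs, axis=0)
    h = predictive_entropy(probs)
    return np.where(h > max_entropy, -1, (p > threshold).astype(int))

# Three ensemble members scoring four transactions: confident fraud,
# confident legitimate, and two genuinely ambiguous cases.
probs = np.array([[0.95, 0.10, 0.55, 0.48],
                  [0.90, 0.05, 0.45, 0.52],
                  [0.92, 0.12, 0.60, 0.50]])
decisions = decide(probs)
# decisions: flag, approve, defer, defer
```

Deferring the ambiguous cases instead of silently thresholding them is one way uncertainty becomes a form of transparency toward the decision-maker.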
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- A Semi-supervised Graph Attentive Network for Financial Fraud Detection [30.645390612737266]
We propose a semi-supervised attentive graph neural network, named SemiGNN, to utilize the multi-view labeled and unlabeled data for fraud detection.
By utilizing the social relations and the user attributes, our method can achieve a better accuracy compared with the state-of-the-art methods on two tasks.
arXiv Detail & Related papers (2020-02-28T10:35:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.