Uncertainty-Aware Credit Card Fraud Detection Using Deep Learning
- URL: http://arxiv.org/abs/2107.13508v1
- Date: Wed, 28 Jul 2021 17:30:46 GMT
- Title: Uncertainty-Aware Credit Card Fraud Detection Using Deep Learning
- Authors: Maryam Habibpour, Hassan Gharoun, Mohammadreza Mehdipour, AmirReza
Tajally, Hamzeh Asgharnezhad, Afshar Shamsi, Abbas Khosravi, Miadreza
Shafie-Khah, Saeid Nahavandi, and Joao P.S. Catalao
- Abstract summary: This study proposes three uncertainty quantification (UQ) techniques, named Monte Carlo dropout, ensemble, and ensemble Monte Carlo dropout, for card fraud detection applied to transaction data.
We show that the ensemble is more effective in capturing uncertainty corresponding to generated predictions.
- Score: 10.681661545798157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Countless research works on deep neural networks (DNNs) for credit
card fraud detection have focused on improving the accuracy of point
predictions and mitigating unwanted biases by building different network
architectures or learning models. Quantifying the uncertainty accompanying point
estimation is essential because it mitigates model unfairness and permits
practitioners to develop trustworthy systems which abstain from suboptimal
decisions due to low confidence. Explicitly, assessing uncertainties associated
with DNNs predictions is critical in real-world card fraud detection settings
for characteristic reasons, including (a) fraudsters constantly change their
strategies, and accordingly, DNNs encounter observations that are not generated
by the same process as the training distribution, and (b) because expert review
is time-consuming, very few transactions are checked in time by professional
experts to update DNNs. Therefore, this study proposes three
uncertainty quantification (UQ) techniques named Monte Carlo dropout, ensemble,
and ensemble Monte Carlo dropout for card fraud detection applied to
transaction data. Moreover, to evaluate the predictive uncertainty estimates,
a UQ confusion matrix and several performance metrics are utilized. Through
experimental results, we show that the ensemble is more effective in capturing
uncertainty corresponding to generated predictions. Additionally, we
demonstrate that the proposed UQ methods provide extra insight into the point
predictions, thereby improving the fraud prevention process.
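As a rough illustration of the abstract's pipeline, the sketch below runs Monte Carlo dropout on a toy linear fraud scorer and tabulates a UQ confusion matrix (correct vs. incorrect prediction crossed with certain vs. uncertain estimate). The data, weights, dropout rate, and certainty threshold are all synthetic stand-ins, not the paper's actual model or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "transactions": 200 points, 8 features, noisy linear labels.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(int)

W = w_true + 0.1 * rng.normal(size=8)  # stand-in for learned weights

def mc_dropout_probs(X, W, n_samples=50, p_drop=0.2):
    """Monte Carlo dropout: average sigmoid outputs over random weight masks."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(W.shape) > p_drop
        logits = X @ (W * mask) / (1 - p_drop)  # inverted-dropout rescaling
        probs.append(1.0 / (1.0 + np.exp(-logits)))
    return np.stack(probs)  # shape (n_samples, n_points)

samples = mc_dropout_probs(X, W)
p_mean = samples.mean(axis=0)  # point prediction
p_std = samples.std(axis=0)    # predictive uncertainty

pred = (p_mean > 0.5).astype(int)
certain = p_std < np.median(p_std)  # illustrative certainty threshold
correct = pred == y

# UQ confusion matrix: correctness crossed with certainty.
TC = int(np.sum(correct & certain))    # true certainty: correct and certain
TU = int(np.sum(~correct & ~certain))  # true uncertainty: wrong and uncertain
FC = int(np.sum(~correct & certain))   # false certainty: wrong but certain
FU = int(np.sum(correct & ~certain))   # false uncertainty: correct but uncertain
```

Replacing the random dropout masks with independently trained models gives the ensemble variant; drawing dropout samples from each ensemble member gives ensemble Monte Carlo dropout.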
Related papers
- Explainability through uncertainty: Trustworthy decision-making with neural networks [1.104960878651584]
Uncertainty is a key feature of any machine learning model.
It is particularly important in neural networks, which tend to be overconfident.
Uncertainty as XAI improves the model's trustworthiness in downstream decision-making tasks.
arXiv Detail & Related papers (2024-03-15T10:22:48Z)
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a widespread but largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
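A heavily simplified sketch of the reweighting idea, using a plain softmax model in place of the paper's Dirichlet-based evidential head: the per-sample trace of the categorical Fisher information, $\sum_k p_k(1-p_k)$, serves as an informativeness score that upweights uncertain samples. The data, the score, and the weighting scheme are illustrative assumptions, not the actual $\mathcal{I}$-EDL objective:

```python
import numpy as np

rng = np.random.default_rng(1)

logits = rng.normal(size=(6, 3))      # 6 samples, 3 classes (toy)
y = rng.integers(0, 3, size=6)

p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Diagonal of the categorical Fisher information w.r.t. the logits is
# p_k (1 - p_k); its trace is large for uncertain (near-uniform) predictions.
fim_trace = np.sum(p * (1 - p), axis=1)

# Reweight the cross-entropy terms so uncertain samples receive more weight,
# focusing learning on the uncertain classes.
ce = -np.log(p[np.arange(len(y)), y])
weights = fim_trace / fim_trace.sum()
weighted_loss = float(np.sum(weights * ce))
```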
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training [29.4569172720654]
We develop a simple adversarial training scheme that incorporates an attack of the uncertainty predicted by the dropout ensemble.
We demonstrate this method improves OOD detection performance on standard data (i.e., not adversarially crafted), and improves the standardized partial AUC from near-random guessing performance to $\geq 0.75$.
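The attack direction can be sketched on a toy logistic model: an FGSM-style step that pushes an input toward low predictive entropy, i.e., attacks the uncertainty estimate. The model, step size, and closed-form entropy gradient are illustrative assumptions, not the paper's dropout-ensemble setup:

```python
import numpy as np

rng = np.random.default_rng(5)

x = rng.normal(size=4)  # toy input
w = rng.normal(size=4)  # toy logistic weights

def entropy_and_grad(x, w):
    """Predictive entropy of a logistic model and its gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # numerical safety for the logs
    H = -p * np.log(p) - (1 - p) * np.log(1 - p)
    dH_dx = np.log((1 - p) / p) * p * (1 - p) * w  # chain rule through sigmoid
    return H, dH_dx

# FGSM-style sign-gradient step that lowers the entropy, i.e., makes the
# model spuriously confident; training against such steps is the defense.
H0, g = entropy_and_grad(x, w)
x_adv = x - 0.25 * np.sign(g)
H1, _ = entropy_and_grad(x_adv, w)
```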
arXiv Detail & Related papers (2022-09-05T14:32:19Z)
- Learning Uncertainty with Artificial Neural Networks for Improved Predictive Process Monitoring [0.114219428942199]
We distinguish two types of learnable uncertainty: model uncertainty due to a lack of training data and noise-induced observational uncertainty.
Our contribution is to apply these uncertainty concepts to predictive process monitoring tasks to train uncertainty-based models to predict the remaining time and outcomes.
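The two uncertainty types can be separated with the law of total variance over an ensemble of probabilistic predictors: the mean of the members' variances captures observational (aleatoric) noise, while the variance of their means captures model (epistemic) uncertainty. The ensemble outputs below are synthetic stand-ins, not the paper's process-monitoring models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ensemble: each member predicts a mean and a variance for every case.
n_members, n_cases = 5, 100
mu = rng.normal(loc=10.0, scale=1.0, size=(n_members, n_cases))
sigma2 = rng.uniform(0.5, 1.5, size=(n_members, n_cases))

# Law of total variance splits predictive variance into the two types.
aleatoric = sigma2.mean(axis=0)  # noise-induced observational uncertainty
epistemic = mu.var(axis=0)       # model uncertainty (lack of training data)
total = aleatoric + epistemic
```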
arXiv Detail & Related papers (2022-06-13T17:05:27Z)
- Gradient-Based Quantification of Epistemic Uncertainty for Deep Object Detectors [8.029049649310213]
We introduce novel gradient-based uncertainty metrics and investigate them for different object detection architectures.
Experiments show significant improvements in true positive / false positive discrimination and prediction of intersection over union.
We also find improvement over Monte-Carlo dropout uncertainty metrics and further significant boosts by aggregating different sources of uncertainty metrics.
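A minimal sketch of a gradient-based uncertainty score: take the per-sample gradient of the loss with the model's own prediction as pseudo-label, and use its norm as the score. The paper works with object detectors; reducing the idea to logistic regression is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(50, 4))  # toy inputs
w = rng.normal(size=4)        # toy logistic weights

p = 1.0 / (1.0 + np.exp(-(X @ w)))
pseudo = (p > 0.5).astype(float)  # self-labels: no ground truth at test time

# Per-sample gradient of the cross-entropy w.r.t. w is (p - y) * x for
# logistic regression; its norm serves as the uncertainty score.
grad = (p - pseudo)[:, None] * X
g_norm = np.linalg.norm(grad, axis=1)
```

Confident predictions (p near 0 or 1) yield small gradients and hence low scores, while borderline predictions yield large ones.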
arXiv Detail & Related papers (2021-07-09T16:04:11Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
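A disagreement-based measure can be sketched as the mean pairwise dissimilarity between predicted segmentation masks; the random masks and the pixel-disagreement dissimilarity function below are illustrative assumptions, not the paper's learned measure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy binary "segmentation" predictions from 4 models/heads on an 8x8 image.
preds = rng.random((4, 8, 8)) > 0.5

def dissimilarity(a, b):
    """Illustrative dissimilarity: fraction of pixels where two masks disagree."""
    return float(np.mean(a != b))

# Uncertainty as mean pairwise disagreement between the predictions.
n = len(preds)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
uncertainty = float(np.mean([dissimilarity(preds[i], preds[j]) for i, j in pairs]))
```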
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Towards Credit-Fraud Detection via Sparsely Varying Gaussian Approximations [0.0]
We propose a credit card fraud detection concept incorporating the uncertainty in our prediction system to ensure better judgment in such a crucial task.
We experiment with different sets of kernels and different numbers of inducing data points to show which configuration obtains the best accuracy.
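Sparse GP approximations hinge on inducing points. The sketch below uses a Nyström kernel approximation with an RBF kernel on synthetic data (an assumption, not the paper's exact construction) to show how the kernel-matrix approximation error shrinks as the number of inducing points grows:

```python
import numpy as np

rng = np.random.default_rng(6)

X = rng.normal(size=(100, 2))  # synthetic inputs

def rbf(A, B, ls=1.0):
    """RBF (squared-exponential) kernel matrix between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

K = rbf(X, X)  # full kernel matrix

def nystrom_error(m):
    """Frobenius error of the Nystrom approximation from m inducing points."""
    Z = X[:m]  # inducing points: first m inputs (illustrative choice)
    Kmm, Knm = rbf(Z, Z), rbf(X, Z)
    K_approx = Knm @ np.linalg.solve(Kmm + 1e-8 * np.eye(m), Knm.T)
    return float(np.linalg.norm(K - K_approx))

errs = [nystrom_error(m) for m in (5, 20, 80)]
```

With nested inducing sets, the residual shrinks monotonically as points are added, which is why the number of inducing points trades accuracy against cost.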
arXiv Detail & Related papers (2020-07-14T16:56:06Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.