Propensity-to-Pay: Machine Learning for Estimating Prediction
Uncertainty
- URL: http://arxiv.org/abs/2008.12065v1
- Date: Thu, 27 Aug 2020 11:49:25 GMT
- Title: Propensity-to-Pay: Machine Learning for Estimating Prediction
Uncertainty
- Authors: Md Abul Bashar, Astin-Walmsley Kieren, Heath Kerina, Richi Nayak
- Abstract summary: This study investigates machine learning models' ability to consider different contexts and estimate the uncertainty in the prediction.
A novel concept of applying a Bayesian Neural Network to the binary classification problem of propensity-to-pay energy bills is proposed and explored for deployment.
- Score: 1.452875650827562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting a customer's propensity-to-pay at an early point in the revenue
cycle can provide organisations many opportunities to improve the customer
experience, reduce hardship and reduce the risk of impaired cash flow and
occurrence of bad debt. With the advancements in data science, machine learning
techniques can be used to build models to accurately predict a customer's
propensity-to-pay. Creating effective machine learning models without access to
large and detailed datasets presents some significant challenges. This paper
presents a case-study, conducted on a dataset from an energy organisation, to
explore the uncertainty around the creation of machine learning models that are
able to predict residential customers entering financial hardship, which then
reduces their ability to pay energy bills. Incorrect predictions can result in
inefficient resource allocation and vulnerable customers not being proactively
identified. This study investigates machine learning models' ability to
consider different contexts and estimate the uncertainty in the prediction.
Seven models from four families of machine learning algorithms are investigated
for their novel utilisation. A novel concept of applying a Bayesian Neural
Network to the binary classification problem of propensity-to-pay energy bills
is proposed and explored for deployment.
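
As a rough illustration of the proposed idea, the sketch below approximates a Bayesian Neural Network with Monte Carlo dropout for a binary propensity-to-pay classifier and returns a per-customer uncertainty estimate. The architecture, feature count, and number of stochastic passes are illustrative assumptions, not the configuration used in the paper.

```python
# Illustrative sketch only: Monte Carlo dropout as an approximation to a
# Bayesian neural network for binary propensity-to-pay classification.
# Feature dimension, layer sizes and sample count are assumptions.
import torch
import torch.nn as nn


class PropensityNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Probability that a customer pays the bill on time.
        return torch.sigmoid(self.net(x))


@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 100):
    """Average several stochastic forward passes with dropout kept active.

    Returns the mean predicted probability and its standard deviation,
    the latter serving as a simple per-customer uncertainty estimate.
    """
    model.train()  # keep dropout layers stochastic (MC dropout)
    probs = torch.stack([model(x) for _ in range(n_samples)])  # (n_samples, N, 1)
    return probs.mean(dim=0), probs.std(dim=0)


# Hypothetical usage: a batch of 5 customers with 12 engineered features.
model = PropensityNet(n_features=12)
x = torch.randn(5, 12)
mean_prob, uncertainty = predict_with_uncertainty(model, x)
print(mean_prob.squeeze(), uncertainty.squeeze())
```

Keeping dropout active at inference time turns each forward pass into a sample from an approximate posterior; the spread of the sampled probabilities indicates how much a given customer's prediction can be trusted.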
Related papers
- Promoting User Data Autonomy During the Dissolution of a Monopolistic Firm [5.864623711097197]
We show how the framework of Conscious Data Contribution can enable user autonomy during dissolution.
We explore how fine-tuning and the phenomenon of "catastrophic forgetting" could actually prove beneficial as a type of machine unlearning.
arXiv Detail & Related papers (2024-11-20T18:55:51Z) - Learning-Augmented Robust Algorithmic Recourse [7.217269034256654]
Algorithmic recourse provides suggestions of minimum-cost improvements to achieve a desirable outcome in the future.
Machine learning models often get updated over time and this can cause a recourse to become invalid.
We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
arXiv Detail & Related papers (2024-10-02T14:15:32Z) - Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z) - Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's ability with active learning and instills confidence in its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z) - Privacy-Preserving Financial Anomaly Detection via Federated Learning & Multi-Party Computation [17.314619091307343]
We describe a privacy-preserving framework that allows financial institutions to jointly train highly accurate anomaly detection models.
We show that our solution enables the network to train a highly accurate anomaly detection model while preserving privacy of customer data.
arXiv Detail & Related papers (2023-10-06T19:16:41Z) - Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z) - VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods to make local explanations of machine learning decisions.
We present VCNet-Variational Counter Net, a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z) - Learnware: Small Models Do Big [69.88234743773113]
The prevailing big model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet addressed those issues, while becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which aims to let users avoid building machine learning models from scratch, with the hope of reusing small models to do things even beyond their original purposes.
arXiv Detail & Related papers (2022-10-07T15:55:52Z) - Analyzing Machine Learning Models for Credit Scoring with Explainable AI
and Optimizing Investment Decisions [0.0]
This paper examines two distinct yet related questions concerning explainable AI (XAI) practices.
The study compares various machine learning models, including single classifiers (logistic regression, decision trees, LDA, QDA), heterogeneous ensembles (AdaBoost, Random Forest), and sequential neural networks.
Two advanced post-hoc model explainability techniques, LIME and SHAP, are utilized to assess ML-based credit scoring models (a minimal SHAP sketch is given after this list).
arXiv Detail & Related papers (2022-09-19T21:44:42Z) - Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Sequential Deep Learning for Credit Risk Monitoring with Tabular
Financial Data [0.901219858596044]
We present our attempts to create a novel approach to assessing credit risk using deep learning.
We propose a new credit card transaction sampling technique to use with deep recurrent and causal convolution-based neural networks.
We show that our sequential deep learning approach using a temporal convolutional network outperformed the benchmark non-sequential tree-based model.
arXiv Detail & Related papers (2020-12-30T21:29:48Z)
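
For the credit-scoring entry above, the sketch below shows a typical post-hoc SHAP workflow. The synthetic dataset and gradient-boosting model are stand-ins for illustration; they are not the data or models evaluated in the cited paper.

```python
# Illustrative SHAP sketch for a credit-scoring-style classifier.
# The data and model here are hypothetical stand-ins.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a credit-scoring dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction (in log-odds) to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance: mean absolute contribution of each feature.
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1][:5]:
    print(f"feature_{i}: {importance[i]:.3f}")
```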