Differential Privacy for Credit Risk Model
- URL: http://arxiv.org/abs/2106.15343v1
- Date: Thu, 24 Jun 2021 09:58:49 GMT
- Title: Differential Privacy for Credit Risk Model
- Authors: Tabish Maniar, Alekhya Akkinepally, Anantha Sharma
- Abstract summary: We assess differential privacy as a solution to address privacy problems.
We evaluate one such tool from LeapYear as applied to the Credit Risk modeling domain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of machine learning algorithms to model user behavior and drive
business decisions has become increasingly commonplace, specifically providing
intelligent recommendations to automated decision making. This has led to an
increase in the use of customers' personal data to analyze customer behavior and
predict their interests in a company's products. Increased use of this customer
personal data can lead to better models but also to the potential of customer
data being leaked, reverse engineered, and mishandled. In this paper, we assess
differential privacy as a solution to address these privacy problems by
building privacy protections into the data engineering and model training
stages of predictive model development. Our interest is a pragmatic
implementation in an operational environment, which necessitates a general
purpose differentially private modeling framework, and we evaluate one such
tool from LeapYear as applied to the Credit Risk modeling domain. Credit Risk
Model is a major modeling methodology in banking and finance where user data is
analyzed to determine the total Expected Loss to the bank. We examine the
application of differential privacy to the credit risk model and compare the
performance of a Differentially Private Model with that of a Non-Differentially
Private Model.
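In credit risk terms, the Expected Loss the abstract refers to is conventionally computed as EL = PD × LGD × EAD (probability of default × loss given default × exposure at default). As a rough illustration of the DP-vs-non-DP comparison above, the sketch below trains a plain logistic regression and a differentially private variant (noisy gradient descent with per-example clipping, in the spirit of DP-SGD) on synthetic data. LeapYear's tool is proprietary, so the data, clipping bound, and noise scale here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: differentially private vs. non-private logistic regression
# on synthetic "credit" data. Not the paper's LeapYear pipeline; the noise
# scale, clipping bound, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic borrower features and default labels -- purely illustrative.
n, d = 5000, 3
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.8])
y = (1 / (1 + np.exp(-X @ true_w)) > rng.uniform(size=n)).astype(float)

def train_logreg(X, y, epochs=50, lr=0.1, dp=False, clip=1.0, noise_multiplier=1.0):
    """Gradient descent on the logistic loss; with dp=True, per-example
    gradients are clipped and Gaussian noise is added, DP-SGD style."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        g = (p - y)[:, None] * X                   # per-example gradients
        if dp:
            norms = np.linalg.norm(g, axis=1, keepdims=True)
            g = g / np.maximum(1.0, norms / clip)  # bound each example's influence
            noise = rng.normal(scale=noise_multiplier * clip, size=w.shape)
            grad = (g.sum(axis=0) + noise) / len(X)
        else:
            grad = g.mean(axis=0)
        w -= lr * grad
    return w

def accuracy(w):
    return float(((X @ w > 0) == (y == 1)).mean())

print("non-private accuracy:", round(accuracy(train_logreg(X, y)), 3))
print("DP accuracy:", round(accuracy(train_logreg(X, y, dp=True)), 3))
```

The quantity of interest is the gap between the two accuracies as the noise multiplier grows; that utility cost is what a DP-vs-non-DP comparison like the paper's measures.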
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- The Effects of Data Imbalance Under a Federated Learning Approach for Credit Risk Forecasting [0.0]
Credit risk forecasting plays a crucial role for commercial banks and other financial institutions in granting loans to customers.
Traditional machine learning methods require the sharing of sensitive client information with an external server to build a global model.
A newly developed privacy-preserving distributed machine learning technique known as Federated Learning (FL) allows a global model to be trained without directly accessing private local data (see the FedAvg sketch after this list).
arXiv Detail & Related papers (2024-01-14T09:15:10Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Machine Learning Models Evaluation and Feature Importance Analysis on NPL Dataset [0.0]
We evaluate how different machine learning models perform on the dataset provided by a private bank in Ethiopia.
XGBoost achieves the highest F1 score on the KMeans SMOTE over-sampled data.
arXiv Detail & Related papers (2022-08-28T17:09:44Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- The Influence of Dropout on Membership Inference in Differentially Private Models [0.0]
Differentially private models seek to protect the privacy of data the model is trained on.
We conduct membership inference attacks against models with and without differential privacy.
arXiv Detail & Related papers (2021-03-16T12:09:51Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework for Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z)
- Information Laundering for Model Privacy [34.66708766179596]
We propose information laundering, a novel framework for enhancing model privacy.
Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use.
arXiv Detail & Related papers (2020-09-13T23:24:08Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective heuristics are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
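As referenced in the federated learning entry above, here is a minimal FedAvg-style sketch of training a global model without centralizing client data. The three "banks", their synthetic shards, and the logistic-regression model are illustrative assumptions, not the cited paper's setup.

```python
# Minimal FedAvg-style sketch for the federated credit-risk entry above.
# Clients, data split, and model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def local_step(w, X, y, lr=0.1, epochs=5):
    """One client's contribution: a few epochs of logistic-regression
    gradient descent on its own shard. It returns updated weights,
    never the raw data."""
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * ((p - y) @ X) / len(X)
    return w

# Three "banks", each holding a private shard of synthetic borrower data.
d = 3
true_w = np.array([1.0, -1.5, 0.5])
shards = []
for _ in range(3):
    X = rng.normal(size=(1000, d))
    y = (1 / (1 + np.exp(-X @ true_w)) > rng.uniform(size=1000)).astype(float)
    shards.append((X, y))

# FedAvg loop: the server broadcasts the global weights, each client trains
# locally, and the server averages the returned updates.
w_global = np.zeros(d)
for _ in range(20):
    updates = [local_step(w_global, X, y) for X, y in shards]
    w_global = np.mean(updates, axis=0)

acc = np.mean([((X @ w_global > 0) == (y == 1)).mean() for X, y in shards])
print(f"federated model accuracy across shards: {acc:.3f}")
```

The privacy-relevant design choice is that only weight updates cross the client boundary; the raw records in each shard never leave the client that holds them.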
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.