Monotonic Neural Additive Models: Pursuing Regulated Machine Learning
Models for Credit Scoring
- URL: http://arxiv.org/abs/2209.10070v1
- Date: Wed, 21 Sep 2022 02:14:09 GMT
- Title: Monotonic Neural Additive Models: Pursuing Regulated Machine Learning
Models for Credit Scoring
- Authors: Dangxing Chen and Weicheng Ye
- Abstract summary: We introduce a novel class of monotonic neural additive models, which meet regulatory requirements by simplifying neural network architecture and enforcing monotonicity.
Our new model is as accurate as black-box fully-connected neural networks, providing a highly accurate and regulated machine learning method.
- Score: 1.90365714903665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The forecasting of credit default risk has been an active research field for
several decades. Historically, logistic regression has been used as a major
tool due to its compliance with regulatory requirements: transparency,
explainability, and fairness. In recent years, researchers have increasingly
used complex and advanced machine learning methods to improve prediction
accuracy. Although a machine learning method can potentially improve model
accuracy, it sacrifices the simplicity of logistic regression, degrades
explainability, and often violates fairness. In the absence of compliance with
regulatory requirements, even highly accurate machine learning methods are
unlikely to be accepted by companies for credit scoring. In this paper, we
introduce a novel class of monotonic neural additive models, which meet
regulatory requirements by simplifying neural network architecture and
enforcing monotonicity. By utilizing the special architectural features of the
neural additive model, the monotonic neural additive model penalizes
monotonicity violations effectively. Consequently, the computational cost of
training a monotonic neural additive model is similar to that of training an
unconstrained neural additive model; monotonicity is enforced essentially for
free. We demonstrate through empirical
results that our new model is as accurate as black-box fully-connected neural
networks, providing a highly accurate and regulated machine learning method.
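To make the additive architecture and the monotonicity penalty concrete, here is a minimal sketch, assuming a PyTorch implementation of a model $f(x) = b + \sum_i f_i(x_i)$ with one small subnetwork per feature and a hinge-style penalty on negative derivatives for features required to be monotonically increasing. The class names, layer sizes, penalty form, and penalty weight are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of a monotonic neural additive model; names and
# hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class ShapeFunction(nn.Module):
    """One small subnetwork per feature: x_i -> f_i(x_i)."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

class MonotonicNAM(nn.Module):
    """Additive model: f(x) = bias + sum_i f_i(x_i)."""
    def __init__(self, n_features: int):
        super().__init__()
        self.shapes = nn.ModuleList(ShapeFunction() for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, n_features); each column feeds its own subnetwork.
        parts = [f(x[:, i:i + 1]) for i, f in enumerate(self.shapes)]
        return self.bias + torch.cat(parts, dim=1).sum(dim=1, keepdim=True)

def monotonicity_penalty(model, x, increasing):
    """Hinge penalty on negative derivatives df_i/dx_i. Each f_i is
    univariate, so the check is a cheap 1-D gradient per feature."""
    penalty = x.new_zeros(())
    for i in increasing:
        xi = x[:, i:i + 1].detach().clone().requires_grad_(True)
        (grad,) = torch.autograd.grad(model.shapes[i](xi).sum(), xi,
                                      create_graph=True)
        penalty = penalty + torch.relu(-grad).mean()
    return penalty

# Illustrative training step on synthetic data.
model = MonotonicNAM(n_features=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 5)
y = torch.randint(0, 2, (256, 1)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
loss = loss + 10.0 * monotonicity_penalty(model, x, increasing=[0, 2])
opt.zero_grad()
loss.backward()
opt.step()
```

Because each $f_i$ depends on a single input, checking monotonicity reduces to the sign of a one-dimensional derivative, which is why the penalty adds little cost on top of ordinary neural additive model training.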
Related papers
- Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning [38.09011520275557]
Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones.
We propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL.
arXiv Detail & Related papers (2024-06-04T15:47:03Z) - Learning to Continually Learn with the Bayesian Principle [36.75558255534538]
In this work, we adopt the meta-learning paradigm to combine the strong representational power of neural networks with the robustness to forgetting of simple statistical models.
Since the neural networks remain fixed during continual learning, they are protected from catastrophic forgetting.
arXiv Detail & Related papers (2024-05-29T04:53:31Z) - Extreme sparsification of physics-augmented neural networks for
interpretable model discovery in mechanics [0.0]
We propose to train regularized physics-augmented neural network-based models utilizing a smoothed version of $L_0$-regularization.
We show that the method can reliably obtain interpretable and trustworthy models for compressible and incompressible hyperelasticity, yield functions, and hardening models for elastoplasticity.
arXiv Detail & Related papers (2023-10-05T16:28:58Z) - Epistemic Modeling Uncertainty of Rapid Neural Network Ensembles for
Adaptive Learning [0.0]
A new type of neural network is presented using the rapid neural network paradigm.
It is found that the proposed emulator embedded neural network trains near-instantaneously, typically without loss of prediction accuracy.
arXiv Detail & Related papers (2023-09-12T22:34:34Z) - NCTV: Neural Clamping Toolkit and Visualization for Neural Network
Calibration [66.22668336495175]
Without proper consideration of calibration, neural network models will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
arXiv Detail & Related papers (2022-11-29T15:03:05Z) - Stabilizing Machine Learning Prediction of Dynamics: Noise and
Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
arXiv Detail & Related papers (2022-11-09T23:40:52Z) - Online model error correction with neural networks in the incremental
4D-Var framework [0.0]
We develop a new weak-constraint 4D-Var formulation which can be used to train a neural network for online model error correction.
The method is implemented in the ECMWF Object-Oriented Prediction System.
The results confirm that online learning is effective and yields a more accurate model error correction than offline learning.
arXiv Detail & Related papers (2022-10-25T07:45:33Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model against a previously proposed model based on an ensemble of simpler neural networks detecting firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)