A New Interpretable Neural Network-Based Rule Model for Healthcare
Decision Making
- URL: http://arxiv.org/abs/2309.11101v1
- Date: Wed, 20 Sep 2023 07:15:48 GMT
- Title: A New Interpretable Neural Network-Based Rule Model for Healthcare
Decision Making
- Authors: Adrien Benamira, Tristan Guerand, Thomas Peyrin
- Abstract summary: In this study, we introduce a neural network framework, $\textit{Truth Table rules}$ (TT-rules), that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks.
TT-rules is built upon $\textit{Truth Table nets}$ (TTnet), a family of deep neural networks initially developed for formal verification.
- Score: 5.666761232081187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In healthcare applications, understanding how machine/deep learning models
make decisions is crucial. In this study, we introduce a neural network
framework, $\textit{Truth Table rules}$ (TT-rules), that combines the global
and exact interpretability properties of rule-based models with the high
performance of deep neural networks. TT-rules is built upon $\textit{Truth
Table nets}$ (TTnet), a family of deep neural networks initially developed for
formal verification. By extracting the necessary and sufficient rules
$\mathcal{R}$ from the trained TTnet model (global interpretability) to yield
the same output as the TTnet (exact interpretability), TT-rules effectively
transforms the neural network into a rule-based model. This rule-based model
supports binary classification, multi-label classification, and regression
tasks for small to large tabular datasets. After outlining the framework, we
evaluate TT-rules' performance on healthcare applications and compare it to
state-of-the-art rule-based methods. Our results demonstrate that TT-rules
achieves equal or higher performance compared to other interpretable methods.
Notably, TT-rules presents the first accurate rule-based model capable of
fitting large tabular datasets, including two real-life DNA datasets with over
20K features.
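The extraction idea can be illustrated with a toy example (this is a sketch of the general truth-table-to-DNF principle, not the authors' implementation; the neuron, weights, and threshold below are made up): for a unit operating on a few binarized features, enumerating its complete truth table yields a DNF rule set that reproduces the unit exactly on every input.

```python
from itertools import product

# Toy stand-in for a trained TTnet unit: a thresholded linear function
# over three binarized features. Weights and threshold are illustrative.
def neuron(x1, x2, x3):
    return int(2 * x1 - 1 * x2 + 1 * x3 >= 2)

# Enumerate the unit's truth table and keep the minterms (inputs mapped
# to 1). Each minterm is one conjunctive rule; their disjunction is an
# exact DNF replacement for the unit.
minterms = [bits for bits in product([0, 1], repeat=3) if neuron(*bits)]

def rule_set(x1, x2, x3):
    return int((x1, x2, x3) in minterms)

# Exactness: the extracted rules agree with the unit on all 2^3 inputs.
assert all(neuron(*b) == rule_set(*b) for b in product([0, 1], repeat=3))

def pretty(minterm):
    names = ["x1", "x2", "x3"]
    return " AND ".join(n if v else f"NOT {n}"
                        for n, v in zip(names, minterm))

print(" OR ".join(f"({pretty(m)})" for m in minterms))
```

Because the rules are read off the full truth table, they are necessary and sufficient by construction; the practical challenge the paper addresses is keeping this tractable for deep networks and large feature sets.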
Related papers
- Neural Network-Based Rule Models With Truth Tables [5.187307904567701]
We introduce a neural network framework that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks.
Our proposed framework, called $\textit{Truth Table rules}$ (TT-rules), is built upon $\textit{Truth Table nets}$ (TTnets).
arXiv Detail & Related papers (2023-09-18T10:13:59Z)
- Neuro-symbolic Rule Learning in Real-world Classification Tasks [75.0907310059298]
We extend pix2rule's neural DNF module to support rule learning in real-world multi-class and multi-label classification tasks.
We propose a novel extended model called neural DNF-EO (Exactly One) which enforces mutual exclusivity in multi-class classification.
arXiv Detail & Related papers (2023-03-29T13:27:14Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on Riemannian Gradient Descent With Illustrations of Speech Processing [74.31472195046099]
We exploit a low-rank tensor-train deep neural network (TT-DNN) to build an end-to-end deep learning pipeline, namely LR-TT-DNN.
A hybrid model combining LR-TT-DNN with a convolutional neural network (CNN) is set up to boost the performance.
Our empirical evidence demonstrates that the LR-TT-DNN and CNN+(LR-TT-DNN) models, despite having fewer parameters, can outperform their TT-DNN and CNN+(TT-DNN) counterparts.
arXiv Detail & Related papers (2022-03-11T15:55:34Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
They can remain brittle with respect to input adversarial perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- Integrating Regular Expressions with Neural Networks via DFA [40.09868407372605]
Integrating rule knowledge into neural networks is important for building hybrid models that achieve better performance.
Specifically, the human-designed rules are formulated as Regular Expressions (REs)
We propose to use the minimal DFA (MDFA) as an intermediate model to capture the matched RE patterns as rule-based features for each input sentence.
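The feature-extraction step can be sketched as follows (a simplification: the paper compiles the REs into a minimal DFA, while this sketch matches each pattern directly with Python's `re`; the rules and sentences are invented for illustration).

```python
import re

# Hypothetical human-designed rules expressed as regular expressions.
RULES = [
    re.compile(r"\bnot (good|great)\b"),     # negated praise
    re.compile(r"\b(excellent|amazing)\b"),  # strong praise
    re.compile(r"\brefund\b"),               # complaint cue
]

def re_features(sentence):
    # One binary feature per rule: 1 if the pattern matches the sentence.
    # These features can then be fed to a neural model alongside the text.
    return [int(p.search(sentence.lower()) is not None) for p in RULES]

print(re_features("The battery is not good, I want a refund"))  # [1, 0, 1]
print(re_features("Amazing screen, excellent battery"))         # [0, 1, 0]
```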
arXiv Detail & Related papers (2021-09-07T05:48:51Z)
- Learning Accurate and Interpretable Decision Rule Sets from Neural Networks [5.280792199222362]
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification.
We consider the problem of learning an interpretable decision rule set as training a neural network in a specific, yet very simple two-layer architecture.
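The two-layer view can be sketched in NumPy (weights are hand-set here to show the forward pass only; in the paper they are learned): the first layer computes conjunctions over binary inputs, and the second layer takes a disjunction over those rules, so the network is literally a DNF formula.

```python
import numpy as np

# Illustrative rule selections (1 = literal participates in the rule).
AND_W = np.array([[1, 1, 0],   # rule 1: x1 AND x2
                  [0, 1, 1]])  # rule 2: x2 AND x3

def dnf_forward(x):
    x = np.asarray(x)
    # AND layer: a rule fires only if every selected literal is on.
    fired = (AND_W @ x) == AND_W.sum(axis=1)
    # OR layer: the prediction is 1 if any rule fires.
    return int(fired.any())

print(dnf_forward([1, 1, 0]))  # 1 (rule 1 fires)
print(dnf_forward([1, 0, 1]))  # 0 (no rule fires)
```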
arXiv Detail & Related papers (2021-03-04T04:10:19Z)
- Learning Reasoning Strategies in End-to-End Differentiable Proving [50.9791149533921]
Conditional Theorem Provers learn optimal rule selection strategy via gradient-based optimisation.
We show that Conditional Theorem Provers are scalable and yield state-of-the-art results on the CLUTRR dataset.
arXiv Detail & Related papers (2020-07-13T16:22:14Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.