Learning Accurate and Interpretable Decision Rule Sets from Neural
Networks
- URL: http://arxiv.org/abs/2103.02826v1
- Date: Thu, 4 Mar 2021 04:10:19 GMT
- Title: Learning Accurate and Interpretable Decision Rule Sets from Neural
Networks
- Authors: Litao Qiao, Weijia Wang, Bill Lin
- Abstract summary: This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification.
We consider the problem of learning an interpretable decision rule set as training a neural network in a specific, yet very simple two-layer architecture.
- Score: 5.280792199222362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a new paradigm for learning a set of independent logical
rules in disjunctive normal form as an interpretable model for classification.
We consider the problem of learning an interpretable decision rule set as
training a neural network in a specific, yet very simple two-layer
architecture. Each neuron in the first layer directly maps to an interpretable
if-then rule after training, and the output neuron in the second layer directly
maps to a disjunction of the first-layer rules to form the decision rule set.
Our representation of neurons in this first rules layer enables us to encode
both the positive and the negative association of features in a decision rule.
State-of-the-art neural net training approaches can be leveraged for learning
highly accurate classification models. Moreover, we propose a sparsity-based
regularization approach to balance classification accuracy against the
simplicity of the derived rules. Our experimental results show that our method
can generate more accurate decision rule sets than other state-of-the-art
rule-learning algorithms, with better accuracy-simplicity trade-offs. Further,
when compared with uninterpretable black-box machine learning approaches such
as random forests and full-precision deep neural networks, our approach can
easily find interpretable decision rule sets that have comparable predictive
performance.
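For readers who want a concrete picture, the sketch below illustrates the general idea described in the abstract: a first layer of neurons that act as soft conjunctions over (possibly negated) binary features, a single output neuron that acts as a soft disjunction of those rule neurons, and a sparsity penalty on the literal-selection weights to trade accuracy for rule simplicity. This is a minimal PyTorch sketch under our own modeling assumptions, not the authors' exact formulation or training procedure; the names (DNFRuleNet, w_pos, w_neg, the 1e-3 penalty weight) are illustrative only.

```python
import torch
import torch.nn as nn

class DNFRuleNet(nn.Module):
    """Illustrative two-layer rules network (a sketch, not the paper's exact model).

    Layer 1: each neuron is a soft AND over selected positive/negative literals
    of binary features, so it can be read off as an if-then rule after training.
    Layer 2: a single output neuron is a soft OR over the rule neurons, so the
    overall model corresponds to a rule set in disjunctive normal form.
    """

    def __init__(self, num_features: int, num_rules: int):
        super().__init__()
        # w_pos[j, i] ~ "rule j requires feature i to be 1"
        # w_neg[j, i] ~ "rule j requires feature i to be 0"
        self.w_pos = nn.Parameter(0.1 * torch.randn(num_rules, num_features))
        self.w_neg = nn.Parameter(0.1 * torch.randn(num_rules, num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) with entries in {0, 1}
        w_pos = torch.sigmoid(self.w_pos)  # soft literal-selection masks in (0, 1)
        w_neg = torch.sigmoid(self.w_neg)
        # A rule is violated if a selected positive literal is 0 or a selected
        # negative literal is 1; soft AND via the product of (1 - violation).
        sat_pos = 1.0 - w_pos.unsqueeze(0) * (1.0 - x.unsqueeze(1))  # (batch, rules, feat)
        sat_neg = 1.0 - w_neg.unsqueeze(0) * x.unsqueeze(1)
        rule_act = (sat_pos * sat_neg).prod(dim=-1)                  # soft AND, (batch, rules)
        # Soft OR over rules: 1 - prod(1 - activation).
        return 1.0 - (1.0 - rule_act).prod(dim=-1)                   # (batch,)

    def sparsity_penalty(self) -> torch.Tensor:
        # Penalize the expected number of selected literals, trading some
        # accuracy for simpler rules (weighted by a hyperparameter in the loss).
        return torch.sigmoid(self.w_pos).sum() + torch.sigmoid(self.w_neg).sum()


# Toy usage (illustrative): 8 binary features, 5 candidate rules.
model = DNFRuleNet(num_features=8, num_rules=5)
x = torch.randint(0, 2, (32, 8)).float()
y = torch.randint(0, 2, (32,)).float()
loss = nn.functional.binary_cross_entropy(model(x), y) + 1e-3 * model.sparsity_penalty()
loss.backward()
```

After training such a model, a first-layer neuron can be read off as an if-then rule by keeping the literals whose selection weights pass a threshold (e.g., sigmoid(w) > 0.5), and the disjunction of the surviving rules forms the decision rule set; the sparsity term shrinks the number of literals each rule keeps.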
Related papers
- Neural Symbolic Logical Rule Learner for Interpretable Learning [1.9526476410335776]
Rule-based neural networks stand out for enabling interpretable classification by learning logical rules for both prediction and interpretation.
We introduce the Normal Form Rule Learner (NFRL) algorithm, leveraging a selective discrete neural network, to learn rules in both Conjunctive Normal Form (CNF) and Disjunctive Normal Form (DNF).
Through extensive experiments on 11 datasets, NFRL demonstrates superior classification performance, quality of learned rules, efficiency and interpretability compared to 12 state-of-the-art alternatives.
arXiv Detail & Related papers (2024-08-21T18:09:12Z)
- Rule Based Learning with Dynamic (Graph) Neural Networks [0.8158530638728501]
We present rule-based graph neural networks (RuleGNNs) that overcome some limitations of ordinary graph neural networks.
Our experiments show that the predictive performance of RuleGNNs is comparable to state-of-the-art graph classifiers.
We introduce new synthetic benchmark graph datasets to show how to integrate expert knowledge into RuleGNNs.
arXiv Detail & Related papers (2024-06-14T12:01:18Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Neuro-symbolic Rule Learning in Real-world Classification Tasks [75.0907310059298]
We extend pix2rule's neural DNF module to support rule learning in real-world multi-class and multi-label classification tasks.
We propose a novel extended model called neural DNF-EO (Exactly One) which enforces mutual exclusivity in multi-class classification.
arXiv Detail & Related papers (2023-03-29T13:27:14Z)
- LEURN: Learning Explainable Univariate Rules with Neural Networks [0.0]
LEURN is a neural network architecture that learns univariate decision rules.
LEURN achieves comparable performance to state-of-the-art methods across 30 datasets for classification and regression problems.
arXiv Detail & Related papers (2023-03-27T06:34:42Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on BP optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Neural-based classification rule learning for sequential data [0.0]
We propose a novel differentiable fully interpretable method to discover both local and global patterns for rule-based binary classification.
It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity.
We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset.
arXiv Detail & Related papers (2023-02-22T11:05:05Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Identification of Nonlinear Dynamic Systems Using Type-2 Fuzzy Neural Networks -- A Novel Learning Algorithm and a Comparative Study [12.77304082363491]
A sliding mode theory-based learning algorithm has been proposed to tune both the premise and consequent parts of type-2 fuzzy neural networks.
The stability of the proposed learning algorithm has been proved by using an appropriate Lyapunov function.
Several comparisons have been carried out, showing that the proposed algorithm converges faster than existing methods.
arXiv Detail & Related papers (2021-04-04T23:44:59Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.