NN2Rules: Extracting Rule List from Neural Networks
- URL: http://arxiv.org/abs/2207.12271v1
- Date: Mon, 4 Jul 2022 09:19:47 GMT
- Title: NN2Rules: Extracting Rule List from Neural Networks
- Authors: G Roshan Lal and Varun Mithal
- Abstract summary: NN2Rules is a decompositional approach to rule extraction, i.e., it extracts a set of decision rules from the parameters of the trained neural network model.
We show that the decision rules extracted have the same prediction as the neural network on any input presented to it, and hence the same accuracy.
- Score: 0.913755431537592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an algorithm, NN2Rules, to convert a trained neural network into a
rule list. Rule lists are more interpretable since they align better with the
way humans make decisions. NN2Rules is a decompositional approach to rule
extraction, i.e., it extracts a set of decision rules from the parameters of
the trained neural network model. We show that the decision rules extracted
have the same prediction as the neural network on any input presented to it,
and hence the same accuracy. A key contribution of NN2Rules is that it allows
hidden neuron behavior to be either soft-binary (e.g., sigmoid activation) or
rectified linear (ReLU) as opposed to existing decompositional approaches that
were developed with the assumption of soft-binary activation.
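To make the decompositional idea concrete, here is a minimal, hypothetical sketch, not the NN2Rules algorithm itself, which derives rules from the weights without brute-force enumeration. For a tiny hand-specified ReLU network over three binary features, inputs are grouped by the hidden layer's firing pattern; within each group the network is linear, and the resulting rule list matches the network's prediction on every input, which is the fidelity property stated above. All weights and names below are invented for illustration.

```python
# Illustrative sketch only: a hand-specified 3-input ReLU network and a
# brute-force rule list with the same prediction on every input. NN2Rules
# itself derives rules from the parameters without enumerating the input space.
from itertools import product

import numpy as np

W1 = np.array([[ 1.0, -1.0,  0.5],    # hidden-layer weights (hypothetical)
               [-0.5,  1.0,  1.0]])
b1 = np.array([-0.4, -0.6])
w2 = np.array([1.0, 1.0])             # output weights (hypothetical)
b2 = -0.5

def predict(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden activations
    return int(w2 @ h + b2 > 0)       # thresholded output logit

rules = []
for bits in product([0, 1], repeat=3):               # enumerate binary inputs
    x = np.array(bits, dtype=float)
    pattern = tuple((W1 @ x + b1 > 0).astype(int))   # which hidden units fire
    cond = " AND ".join(f"x{i}={b}" for i, b in enumerate(bits))
    rules.append((cond, predict(x), pattern))

for cond, label, pattern in rules:
    print(f"IF {cond} THEN class={label}  # hidden firing pattern {pattern}")

# Fidelity check: each rule's label agrees with the network on its input.
for (cond, label, _), bits in zip(rules, product([0, 1], repeat=3)):
    assert label == predict(np.array(bits, dtype=float))
```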
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions (a minimal Sinkhorn sketch appears after this list).
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Nearest Neighbor Representations of Neural Circuits [12.221087476416056]
Nearest Neighbor (NN) representations are a novel approach to computation.
We provide explicit constructions for their NN representation with an explicit bound on the number of bits.
Example functions include NN representations of convex polytopes (AND of threshold gates), IP2, OR of threshold gates, and linear or exact decision lists.
arXiv Detail & Related papers (2024-02-13T19:38:01Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Learning to Reverse DNNs from AI Programs Automatically [8.414732322675093]
We propose NNReverse, the first learning-based method which can reverse DNNs from AI programs without domain knowledge.
To represent assembly instructions semantics precisely, NNReverse proposes a more fine-grained embedding model.
arXiv Detail & Related papers (2022-05-20T04:17:19Z)
- Efficient Decompositional Rule Extraction for Deep Neural Networks [5.69361786082969]
ECLAIRE is a novel polynomial-time rule extraction algorithm capable of scaling to both large DNN architectures and large training datasets.
We show that ECLAIRE consistently extracts more accurate and comprehensible rule sets than the current state-of-the-art methods.
arXiv Detail & Related papers (2021-11-24T16:54:10Z)
- LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking [62.634516517844496]
We propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules with the performance of neural learning.
Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches.
arXiv Detail & Related papers (2021-06-17T20:22:45Z)
- Learning Accurate and Interpretable Decision Rule Sets from Neural Networks [5.280792199222362]
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification.
We consider the problem of learning an interpretable decision rule set as training a neural network in a specific, yet very simple two-layer architecture.
arXiv Detail & Related papers (2021-03-04T04:10:19Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks based on local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) is developed based on the proposed upper bound on the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation [16.956140135868733]
We introduce the concept of first-order convolutional rules, which are logical rules that can be extracted using a convolutional neural network (CNN).
Our approach is based on rule extraction from binary neural networks with local search.
Our experiments show that the proposed approach is able to model the functionality of the neural network while at the same time producing interpretable logical rules.
arXiv Detail & Related papers (2020-12-15T17:55:53Z)
- Training Binary Neural Networks through Learning with Noisy Supervision [76.26677550127656]
This paper formalizes the binarization operations over neural networks from a learning perspective.
Experimental results on benchmark datasets indicate that the proposed binarization technique attains consistent improvements over baselines.
arXiv Detail & Related papers (2020-10-10T01:59:39Z)
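As referenced in the LinSATNet entry above, the classic Sinkhorn algorithm that its satisfiability layer extends can be sketched in a few lines. This is a minimal single row/column-marginal version under assumed inputs, not the paper's multi-marginal differentiable layer; all names here are illustrative.

```python
# Minimal Sinkhorn normalization: alternately rescale rows and columns of a
# positive matrix until they match target marginals. LinSATNet extends this
# primitive to jointly encode multiple sets of marginals; this sketch covers
# only the classic single-marginal-pair case.
import numpy as np

def sinkhorn(scores, row_marg, col_marg, n_iter=100, eps=1e-9):
    P = np.exp(scores)                                    # positive matrix
    for _ in range(n_iter):
        P *= (row_marg / (P.sum(axis=1) + eps))[:, None]  # match row sums
        P *= (col_marg / (P.sum(axis=0) + eps))[None, :]  # match column sums
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)), np.ones(4), np.ones(4))
print(np.round(P.sum(axis=1), 6), np.round(P.sum(axis=0), 6))  # all ~1.0
```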
This list is automatically generated from the titles and abstracts of the papers on this site.