Checking extracted rules in Neural Networks
- URL: http://arxiv.org/abs/2509.16547v1
- Date: Sat, 20 Sep 2025 06:15:47 GMT
- Title: Checking extracted rules in Neural Networks
- Authors: Adrian Wurm
- Abstract summary: We investigate formal verification of extracted rules for Neural Networks. A rule is a global property or a pattern concerning a large portion of the input space of a network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we investigate formal verification of extracted rules for Neural Networks from a complexity-theoretic point of view. A rule is a global property or a pattern concerning a large portion of the input space of a network. Such rules are algorithmically extracted from networks in an effort to better understand their inner workings. We focus on three problems: Does a given set of rules apply to a given network? Is a given set of rules consistent, or do the rules contradict each other? Is a given set of rules exhaustive, in the sense that for every input the output is determined? Algorithms that extract such rules from networks have been investigated over the last 30 years; however, to the author's knowledge, no attempt at verifying the extracted rules has been made until now. Many extraction approaches rely on heuristics involving randomness and over-approximation, so it is useful to know whether knowledge obtained in this way can actually be trusted. We investigate the above questions for neural networks with ReLU activation as well as for Boolean networks, each for several types of rules. We demonstrate how these problems can be reduced to each other and show that most of them are co-NP-complete.
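To make the three verification questions more concrete, here is a minimal sketch, assuming a toy Boolean network and a hypothetical if-then rule format chosen purely for illustration (none of these names or functions come from the paper). It decides whether a rule applies to the network by exhaustive counterexample search, the kind of check whose complexity the paper analyses.

```python
# Minimal illustrative sketch (not from the paper): brute-force check of
# whether an if-then rule applies to a small Boolean network. The paper shows
# that such verification questions are mostly co-NP-complete; this toy
# enumeration is exponential in the number of inputs.
from itertools import product

def boolean_network(x):
    """A hypothetical toy Boolean network: two 'hidden gates', one output."""
    h1 = x[0] and not x[2]
    h2 = x[1] or x[3]
    return h1 and h2

def rule_applies(network, premise, conclusion, n_inputs):
    """Check 'premise(x) implies conclusion(network(x))' for every input.

    Returns (True, None) if the rule applies to the network, otherwise
    (False, counterexample).
    """
    for x in product([False, True], repeat=n_inputs):
        if premise(x) and not conclusion(network(x)):
            return False, x  # counterexample: premise holds, conclusion fails
    return True, None

# Hypothetical rule: "if x0 and x1 are set and x2 is not, the output is 1".
ok, cex = rule_applies(
    boolean_network,
    premise=lambda x: x[0] and x[1] and not x[2],
    conclusion=lambda y: y is True,
    n_inputs=4,
)
print("rule applies" if ok else f"counterexample: {cex}")
```

For ReLU networks the input space cannot be enumerated; in practice one would search for a counterexample with an LP-, MILP-, or SMT-based solver, which matches the co-NP flavour of the problems stated in the abstract: a single counterexample suffices to refute a rule.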
Related papers
- FRRI: a novel algorithm for fuzzy-rough rule induction [0.8575004906002217]
We introduce a novel rule induction algorithm called Fuzzy Rough Rule Induction (FRRI).
We provide background and explain the workings of our algorithm.
We find that our algorithm is more accurate while creating small rulesets.
arXiv Detail & Related papers (2024-03-07T12:34:03Z) - Incorporating Expert Rules into Neural Networks in the Framework of Concept-Based Learning [2.9370710299422598]
The paper proposes how to combine logical rules with neural networks that predict concept probabilities.
We provide several approaches for solving the stated problem and for training neural networks.
The code of proposed algorithms is publicly available.
arXiv Detail & Related papers (2024-02-22T17:33:49Z) - Abstracting Concept-Changing Rules for Solving Raven's Progressive Matrix Problems [54.26307134687171]
Raven's Progressive Matrices (RPM) is a classic test of abstract reasoning ability in machine intelligence, in which an answer is selected from candidates.
Recent studies suggest that solving RPM in an answer-generation way fosters a more in-depth understanding of rules.
We propose a deep latent variable model for Concept-changing Rule ABstraction (CRAB) by learning interpretable concepts and parsing concept-changing rules in the latent space.
arXiv Detail & Related papers (2023-07-15T07:16:38Z) - The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (which stands for Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations, and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z) - NN2Rules: Extracting Rule List from Neural Networks [0.913755431537592]
NN2Rules is a decompositional approach to rule extraction, i.e., it extracts a set of decision rules from the parameters of the trained neural network model.
We show that the decision rules extracted have the same prediction as the neural network on any input presented to it, and hence the same accuracy.
arXiv Detail & Related papers (2022-07-04T09:19:47Z) - Full network nonlocality [68.8204255655161]
We introduce the concept of full network nonlocality, which describes correlations that necessitate all links in a network to distribute nonlocal resources.
We show that the most well-known network Bell test does not witness full network nonlocality.
More generally, we point out that established methods for analysing local and theory-independent correlations in networks can be combined in order to deduce sufficient conditions for full network nonlocality.
arXiv Detail & Related papers (2021-05-19T18:00:02Z) - Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation [16.956140135868733]
We introduce the concept of first-order convolutional rules, which are logical rules that can be extracted using a convolutional neural network (CNN).
Our approach is based on rule extraction from binary neural networks with local search.
Our experiments show that the proposed approach is able to model the functionality of the neural network while at the same time producing interpretable logical rules.
arXiv Detail & Related papers (2020-12-15T17:55:53Z) - RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z) - Layerwise Knowledge Extraction from Deep Convolutional Networks [0.9137554315375922]
We propose a novel layerwise knowledge extraction method using M-of-N rules.
We show that this approach produces rules close to an optimal complexity-error tradeoff.
We also find that the softmax layer in Convolutional Neural Networks and Autoencoders is highly explainable by rule extraction.
arXiv Detail & Related papers (2020-03-19T19:46:45Z) - Learn to Predict Sets Using Feed-Forward Neural Networks [63.91494644881925]
This paper addresses the task of set prediction using deep feed-forward neural networks.
We present a novel approach for learning to predict sets with unknown permutation and cardinality.
We demonstrate the validity of our set formulations on relevant vision problems.
arXiv Detail & Related papers (2020-01-30T01:52:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.