Logic interpretations of ANN partition cells
- URL: http://arxiv.org/abs/2408.14314v1
- Date: Mon, 26 Aug 2024 14:43:43 GMT
- Title: Logic interpretations of ANN partition cells
- Authors: Ingo Schmitt
- Abstract summary: Consider a binary classification problem solved using a feed-forward artificial neural network (ANN).
Let the ANN be composed of a ReLU layer and several linear layers (convolution, sum-pooling, or fully connected).
We construct a bridge between a simple ANN and logic. As a result, we can analyze and manipulate the semantics of an ANN using the powerful tool set of logic.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Consider a binary classification problem solved using a feed-forward artificial neural network (ANN). Let the ANN be composed of a ReLU layer and several linear layers (convolution, sum-pooling, or fully connected). We assume the network was trained with high accuracy. Despite numerous suggested approaches, interpreting an artificial neural network remains challenging for humans. For a new method of interpretation, we construct a bridge between a simple ANN and logic. As a result, we can analyze and manipulate the semantics of an ANN using the powerful tool set of logic. To achieve this, we decompose the input space of the ANN into several network partition cells. Each network partition cell represents a linear combination that maps input values to a classifying output value. For interpreting the linear map of a partition cell using logic expressions, we suggest minterm values as the input of a simple ANN. We derive logic expressions representing interaction patterns for separating objects classified as 1 from those classified as 0. To facilitate an interpretation of logic expressions, we present them as binary logic trees.
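The cell construction above admits a compact illustration. Below is a minimal sketch (not the author's code) for a network with one hidden ReLU layer between two linear layers: the ReLU on/off pattern of an input identifies its partition cell, and within that cell the network is a fixed affine map whose weights can be read off directly.

```python
# Minimal sketch of network partition cells for a linear -> ReLU -> linear
# network: the ReLU activation pattern of an input fixes a cell on which
# the whole network is affine, so the cell's linear map can be extracted.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # first linear layer
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # output layer

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def cell_affine_map(x):
    """Return (w, c) such that forward(y) == w @ y + c for every y in the
    same partition cell (same ReLU on/off pattern) as x."""
    s = (W1 @ x + b1 > 0).astype(float)     # activation pattern = cell id
    w = W2 @ (s[:, None] * W1)              # effective linear weights
    c = W2 @ (s * b1) + b2                  # effective bias
    return w, c

x = rng.normal(size=4)
w, c = cell_affine_map(x)
assert np.allclose(forward(x), w @ x + c)   # affine map agrees on the cell
```

Evaluating such per-cell affine maps on binary minterm inputs is the starting point for the logic expressions the abstract describes.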
Related papers
- Convolutional Differentiable Logic Gate Networks [68.74313756770123]
We propose an approach for learning logic gate networks directly via a differentiable relaxation.
We build on this idea, extending it by deep logic gate tree convolutions and logical OR pooling.
On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
arXiv Detail & Related papers (2024-11-07T14:12:00Z)
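As a hedged illustration of the idea behind differentiable logic gate networks (the paper learns softmax-weighted mixtures over the 16 two-input gates; the snippet below is not its implementation), real-valued relaxations of AND and OR let gradients flow through logic operations:

```python
# Soft relaxations of logic gates: inputs in [0, 1] are treated as
# probabilities of being True, and each gate gets a smooth surrogate
# that agrees with Boolean logic exactly on {0, 1} inputs.
def soft_and(a, b):
    return a * b                 # product t-norm

def soft_or(a, b):
    return a + b - a * b         # probabilistic sum

a, b = 0.9, 0.2
print(soft_and(a, b), soft_or(a, b))  # 0.18, 0.92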
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
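The classic Sinkhorn algorithm that LinSATNet extends can be sketched in a few lines; this is the plain single-marginal version, not the paper's jointly encoded multi-set extension:

```python
# Classic Sinkhorn iteration: alternately rescale rows and columns of a
# positive matrix until its marginals match the prescribed sums.
import numpy as np

def sinkhorn(M, row_sums, col_sums, iters=200):
    P = M.copy()
    for _ in range(iters):
        P *= (row_sums / P.sum(axis=1))[:, None]  # match row marginals
        P *= (col_sums / P.sum(axis=0))[None, :]  # match column marginals
    return P

M = np.exp(np.random.default_rng(1).normal(size=(3, 3)))
P = sinkhorn(M, row_sums=np.ones(3), col_sums=np.ones(3))
print(P.sum(axis=1), P.sum(axis=0))  # both approximately [1, 1, 1]
```

Since every rescaling step is differentiable, such an iteration can be unrolled inside a network as a constraint-enforcing layer.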
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
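How a logical formula can be grounded onto network outputs, as in LOGICSEG's fuzzy-logic relaxation, can be sketched as follows; the rule, scores, and choice of implication here are illustrative assumptions, not the paper's actual formulae:

```python
# Fuzzy-logic relaxation sketch: ground the rule "cat(x) -> animal(x)"
# onto predicted class scores and turn rule violations into a
# differentiable penalty that can be added to the training loss.
def implies(p, q):
    return 1.0 - p + p * q       # Reichenbach fuzzy implication

p_cat, p_animal = 0.8, 0.3       # hypothetical per-pixel class probabilities
truth = implies(p_cat, p_animal) # degree to which the rule holds
loss = 1.0 - truth               # differentiable penalty for violation
print(truth, loss)
```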
- Transforming to Yoked Neural Networks to Improve ANN Structure [0.0]
Most existing artificial neural networks (ANNs) are designed as tree structures that imitate biological neural networks.
We propose a model, YNN (Yoked Neural Network), to efficiently eliminate such structural bias.
In our model, nodes also carry out aggregation and transformation of features, and edges determine the flow of information.
arXiv Detail & Related papers (2023-06-03T16:56:18Z)
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world dataset MNIST, and the results verify our analysis and conclusions.
arXiv Detail & Related papers (2021-09-03T11:45:51Z)
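For a concrete feel of the topological quantities used above: the zeroth Betti number simply counts connected components. A minimal sketch (not the paper's method, which applies Betti numbers to input data and networks more generally) computes beta_0 of an epsilon-neighborhood graph with union-find:

```python
# beta_0 of an epsilon-neighborhood graph: connect points closer than
# eps, then count connected components with union-find.
import numpy as np

def betti_0(points, eps):
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)   # merge the two components
    return len({find(i) for i in range(n)})

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(betti_0(pts, eps=0.5))  # 2: one close pair plus one isolated point
```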
- A Logical Neural Network Structure With More Direct Mapping From Logical Relations [8.239523696224975]
Correctly representing and storing logical relations in computer systems is a prerequisite for automated judgment and decision-making.
Current numeric ANN models are good at perceptual intelligence such as image recognition, but not at cognitive intelligence such as logical representation.
This paper proposes a novel logical ANN model by designing new logical neurons and links that meet the demands of logical representation.
arXiv Detail & Related papers (2021-06-22T00:53:08Z)
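The kind of direct logical mapping the paper argues for can be illustrated with classic threshold neurons, which realize AND and OR exactly; this is a generic sketch, not the paper's proposed logical neurons:

```python
# Threshold neurons mapping logic directly: AND fires only when all
# inputs are on, OR fires when at least one input is on.
import numpy as np

def and_neuron(x):
    return float(np.sum(x) >= len(x))   # all inputs must be 1

def or_neuron(x):
    return float(np.sum(x) >= 1)        # any input being 1 suffices

x = np.array([1.0, 1.0, 0.0])
print(and_neuron(x), or_neuron(x))      # 0.0 1.0
```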
- Rule Extraction from Binary Neural Networks with Convolutional Rules for Model Validation [16.956140135868733]
We introduce the concept of first-order convolutional rules, which are logical rules that can be extracted using a convolutional neural network (CNN).
Our approach is based on rule extraction from binary neural networks with local search.
Our experiments show that the proposed approach is able to model the functionality of the neural network while at the same time producing interpretable logical rules.
arXiv Detail & Related papers (2020-12-15T17:55:53Z)
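For intuition about rule extraction (the paper relies on local search to scale; the exhaustive enumeration below is only feasible for tiny inputs), a small binary network's behavior can be read off as a DNF rule, one conjunction per positively classified input:

```python
# Toy rule extraction by enumeration: list every binary input the
# network classifies as 1 and emit the equivalent DNF formula.
from itertools import product

def net(x):                       # hypothetical binary classifier: x0 AND (x1 OR x2)
    return x[0] and (x[1] or x[2])

terms = []
for x in product([0, 1], repeat=3):
    if net(x):
        lits = [f"x{i}" if v else f"NOT x{i}" for i, v in enumerate(x)]
        terms.append("(" + " AND ".join(lits) + ")")
print(" OR ".join(terms))         # DNF rule equivalent to the network
```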
- Learning Syllogism with Euler Neural-Networks [20.47827965932698]
The central vector of a ball inherits the representational power of a traditional neural network.
A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) can optimize an Euler diagram representing logical premises.
In contrast to traditional neural networks, the ENN can precisely represent all 24 different structures of syllogism.
arXiv Detail & Related papers (2020-07-14T19:35:35Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representations can achieve improved sample complexity compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- Deep Networks as Logical Circuits: Generalization and Interpretation [10.223907995092835]
We present a hierarchical decomposition of the Deep Neural Networks (DNNs) discrete classification map into logical (AND/OR) combinations of intermediate (True/False) classifiers of the input.
We show that the learned, internal, logical computations correspond to semantically meaningful categories that allow DNN descriptions in plain English.
arXiv Detail & Related papers (2020-03-25T20:39:53Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.