ERIC: Extracting Relations Inferred from Convolutions
- URL: http://arxiv.org/abs/2010.09452v1
- Date: Mon, 19 Oct 2020 13:04:21 GMT
- Title: ERIC: Extracting Relations Inferred from Convolutions
- Authors: Joe Townsend, Theodoros Kasioumis and Hiroya Inakoshi
- Abstract summary: We show that the behaviour of kernels across multiple layers of a convolutional neural network can be approximated using a logic program.
We also show that an extracted program can be used as a framework for further understanding the behaviour of CNNs.
- Score: 1.878433493707693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our main contribution is to show that the behaviour of kernels across
multiple layers of a convolutional neural network can be approximated using a
logic program. The extracted logic programs yield accuracies that correlate
with those of the original model, though with some information loss,
particularly as approximations of multiple layers are chained together or as
lower layers are quantised. We also show that an extracted program can be used
as a framework for further understanding the behaviour of CNNs. Specifically,
it can be used to identify key kernels worthy of deeper inspection and to
identify their relationships with other kernels in the form of logical rules.
Finally, we make a preliminary, qualitative assessment of rules we extract from
the last convolutional layer and show that the kernels identified are symbolic in
that they react strongly to sets of similar images that effectively divide
output classes into sub-classes with distinct characteristics.
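The rule-extraction idea described above can be illustrated with a minimal, hypothetical sketch (the binarisation threshold, the greedy search, and all names below are illustrative assumptions, not ERIC's actual algorithm): pooled kernel activations are quantised into boolean atoms, and a conjunction of atoms is grown greedily until it predicts a target class with high precision.

```python
def binarise(activations, threshold=0.5):
    """Quantise each kernel's (pooled) activation into a boolean atom."""
    return [[a > threshold for a in sample] for sample in activations]

def extract_rule(atoms, labels, target, min_precision=0.9):
    """Greedily grow a conjunction of kernel atoms predicting `target`.

    Returns the rule body as a sorted list of kernel indices.
    """
    n_kernels = len(atoms[0])
    body, covered = [], list(range(len(atoms)))

    def precision(idx):
        return sum(labels[i] == target for i in idx) / len(idx)

    while covered:
        if precision(covered) >= min_precision:
            break  # the current conjunction is precise enough
        best, best_p = None, -1.0
        for k in range(n_kernels):
            if k in body:
                continue
            kept = [i for i in covered if atoms[i][k]]
            if kept and precision(kept) > best_p:
                best, best_p = k, precision(kept)
        if best is None:
            break  # no atom improves the rule further
        body.append(best)
        covered = [i for i in covered if atoms[i][best]]
    return sorted(body)

# Toy data: two "kernels"; class 1 fires only when both are active.
acts = [[0.9, 0.8], [0.9, 0.1], [0.2, 0.7], [0.1, 0.2]]
labels = [1, 0, 0, 0]
rule = extract_rule(binarise(acts), labels, target=1)
```

Each returned index reads as a body literal, so `[0, 1]` corresponds to a rule like `target :- kernel_0, kernel_1`.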
Related papers
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based
Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - Deciphering 'What' and 'Where' Visual Pathways from Spectral Clustering of Layer-Distributed Neural Representations [15.59251297818324]
We present an approach for analyzing grouping information contained within a neural network's activations.
We exploit features from all layers, obviating the need to guess which part of the model contains relevant information.
arXiv Detail & Related papers (2023-12-11T01:20:34Z) - Using Logic Programming and Kernel-Grouping for Improving
Interpretability of Convolutional Neural Networks [1.6317061277457001]
We present a neurosymbolic framework, NeSyFOLD-G that generates a symbolic rule-set using the last layer kernels of the CNN.
We show that grouping similar kernels leads to a significant reduction in the size of the rule-set generated by FOLD-SE-M.
We also propose a novel algorithm for labeling each predicate in the rule-set with the semantic concept(s) that its corresponding kernel group represents.
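The effect of grouping similar kernels can be sketched with a toy, hypothetical routine (cosine similarity over per-kernel signatures and the greedy single-pass assignment below are illustrative choices, not necessarily NeSyFOLD-G's actual grouping procedure):

```python
import math

def cosine(u, v):
    """Cosine similarity between two kernel signature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def group_kernels(kernels, threshold=0.9):
    """A kernel joins the first group whose representative it resembles
    closely enough; otherwise it founds a new group."""
    groups = []  # list of (representative_signature, member_indices)
    for idx, k in enumerate(kernels):
        for rep, members in groups:
            if cosine(rep, k) >= threshold:
                members.append(idx)
                break
        else:
            groups.append((k, [idx]))
    return [members for _, members in groups]

# Toy "kernel signatures": two near-duplicates and one distinct kernel.
ks = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
grouped = group_kernels(ks)
```

Collapsing each group into a single predicate is what shrinks the rule-set: here the three kernels yield only two predicates.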
arXiv Detail & Related papers (2023-10-19T18:12:49Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - Verification of Neural Network Control Systems using Symbolic Zonotopes
and Polynotopes [1.0312968200748116]
Verification and safety assessment of neural network controlled systems (NNCSs) is an emerging challenge.
To provide guarantees, verification tools must efficiently capture the interplay between the neural network and the physical system within the control loop.
A compositional approach focused on preserving long term symbolic dependency is proposed for the analysis of NNCSs.
arXiv Detail & Related papers (2023-06-26T11:52:14Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Inference Graphs for CNN Interpretation [12.765543440576144]
Convolutional neural networks (CNNs) have achieved superior accuracy in many visual related tasks.
We propose to model the network hidden layers activity using probabilistic models.
We show that such graphs are useful for understanding the general inference process of a class, as well as explaining decisions the network makes regarding specific images.
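As a minimal, hedged illustration of modelling hidden-layer activity probabilistically (the per-class univariate Gaussians below are an illustrative simplification, not the paper's actual graphical model), one can fit a distribution to a unit's activations per class and compare log-likelihoods for a new activation:

```python
import math
import statistics

def fit_gaussian(samples):
    """Fit a 1-D Gaussian to one hidden unit's activations for one class."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples) or 1e-6  # avoid zero variance
    return mu, sigma

def log_likelihood(x, mu, sigma):
    """Log-density of x under N(mu, sigma^2)."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

# Toy data: one unit's activations observed for class A vs class B.
mu_a, sd_a = fit_gaussian([0.9, 1.0, 1.1])
mu_b, sd_b = fit_gaussian([0.1, 0.0, 0.2])

# A new activation of 0.95 is better explained by class A's model.
better_a = log_likelihood(0.95, mu_a, sd_a) > log_likelihood(0.95, mu_b, sd_b)
```

Chaining such per-unit comparisons across layers is one way a graph over hidden activity could support class-level and per-image explanations.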
arXiv Detail & Related papers (2021-10-20T13:56:09Z) - Generalized Leverage Score Sampling for Neural Networks [82.95180314408205]
Leverage score sampling is a powerful technique that originates from theoretical computer science.
In this work, we generalize the results in [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels.
arXiv Detail & Related papers (2020-09-21T14:46:01Z) - Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolution networks (GCNs).
We design a specific GAE-based model for graph clustering to be consistent with the theory, namely Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.