From Common Sense Reasoning to Neural Network Models through Multiple
Preferences: an overview
- URL: http://arxiv.org/abs/2107.04870v1
- Date: Sat, 10 Jul 2021 16:25:19 GMT
- Authors: Laura Giordano, Valentina Gliozzi, Daniele Theseider Dupré
- Abstract summary: We discuss the relationships between conditional and preferential logics and neural network models.
We propose a concept-wise multipreference semantics, recently introduced for defeasible description logics.
The paper describes the general approach, through the cases of Self-Organising Maps and Multilayer Perceptrons.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we discuss the relationships between conditional and
preferential logics and neural network models, based on a multi-preferential
semantics. We propose a concept-wise multipreference semantics, recently
introduced for defeasible description logics to take into account preferences
with respect to different concepts, as a tool for providing a semantic
interpretation to neural network models. This approach has been explored both
for unsupervised neural network models (Self-Organising Maps) and for
supervised ones (Multilayer Perceptrons), and we expect that the same approach
might be extended to other neural network models. It allows for logical
properties of the network to be checked (by model checking) over an
interpretation capturing the input-output behavior of the network. For
Multilayer Perceptrons, the deep network itself can be regarded as a
conditional knowledge base, in which synaptic connections correspond to
weighted conditionals. The paper describes the general approach, through the
cases of Self-Organising Maps and Multilayer Perceptrons, and discusses some
open issues and perspectives.
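To make the model-checking idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): a finite interpretation is built from the input-output behavior of a toy network, each element is ranked by its activation for a concept, and a typicality-based conditional property T(C) ⊑ D is checked by testing whether the most typical C-elements also satisfy D. The network, concept names, and the 0.5 satisfaction threshold are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def network(x):
    # Stand-in for a trained network's output units for two concepts C and D.
    c = 1.0 / (1.0 + np.exp(-(x[0] + x[1])))      # activation of concept C
    d = 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 0.1))) # activation of concept D
    return {"C": c, "D": d}

# Domain of the interpretation: a finite sample of inputs (e.g. a test set).
domain = [rng.normal(size=2) for _ in range(200)]
activations = [network(x) for x in domain]

def satisfies(a, concept, threshold=0.5):
    # An element satisfies a concept when its activation exceeds a threshold.
    return a[concept] > threshold

def check_typical_implies(antecedent, consequent):
    # Check T(C) ⊑ D: the most preferred (highest-activation) elements of C
    # must all satisfy D, following the concept-wise multipreference reading
    # in which each concept induces its own preference relation.
    c_elems = [a for a in activations if satisfies(a, antecedent)]
    if not c_elems:
        return True  # vacuously satisfied
    best = max(a[antecedent] for a in c_elems)
    typical = [a for a in c_elems if a[antecedent] == best]
    return all(satisfies(a, consequent) for a in typical)

print("T(C) \u2291 D holds:", check_typical_implies("C", "D"))
```

In the paper's setting the preference relations come from the semantics of the weighted conditional knowledge base extracted from the network; here activation magnitude is used as a simple proxy for that ranking.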
Related papers
- This Probably Looks Exactly Like That: An Invertible Prototypical Network [8.957872207471311]
Prototypical neural networks represent an exciting way forward in realizing human-comprehensible machine learning without concept annotations.
We find that reliance on indirect interpretation functions for prototypical explanations imposes a severe limit on prototypes' informative power.
We propose one such model, called ProtoFlow, by composing a normalizing flow with Gaussian mixture models.
arXiv Detail & Related papers (2024-07-16T21:51:02Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Understanding polysemanticity in neural networks through coding theory [0.8702432681310401]
We propose a novel practical approach to network interpretability and theoretical insights into polysemanticity and the density of codes.
We show how random projections can reveal whether a network exhibits a smooth or non-differentiable code and hence how interpretable the code is.
Our approach advances the pursuit of interpretability in neural networks, providing insights into their underlying structure and suggesting new avenues for circuit-level interpretability.
arXiv Detail & Related papers (2024-01-31T16:31:54Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks [4.153804257347222]
We present Agglomerator, a framework capable of providing a representation of part-whole hierarchies from visual cues.
We evaluate our method on common datasets, such as SmallNORB, MNIST, FashionMNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-03-07T10:56:13Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- NFT-K: Non-Fungible Tangent Kernels [23.93508901712177]
We develop a new network as a combination of multiple neural tangent kernels, one to model each layer of the deep neural network individually.
We demonstrate the interpretability of this model on two datasets, showing that the multiple kernels model elucidates the interplay between the layers and predictions.
arXiv Detail & Related papers (2021-10-11T00:35:47Z)
- Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model [0.0]
We investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a deep neural network model.
Weighted knowledge bases for description logics are considered under a "concept-wise" multipreference semantics.
arXiv Detail & Related papers (2020-12-24T19:04:51Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that also extend to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.