Learning Hierarchically-Structured Concepts II: Overlapping Concepts,
and Networks With Feedback
- URL: http://arxiv.org/abs/2304.09540v2
- Date: Tue, 11 Jul 2023 18:46:27 GMT
- Title: Learning Hierarchically-Structured Concepts II: Overlapping Concepts,
and Networks With Feedback
- Authors: Nancy Lynch and Frederik Mallmann-Trenn
- Abstract summary: In Lynch and Mallmann-Trenn (Neural Networks, 2021), we considered simple tree-structured concepts and feed-forward layered networks.
Here we extend the model in two ways: we allow limited overlap between children of different concepts, and we allow networks to include feedback edges.
We describe and analyze algorithms for recognition and algorithms for learning.
- Score: 4.847980206213334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We continue our study from Lynch and Mallmann-Trenn (Neural Networks, 2021),
of how concepts that have hierarchical structure might be represented in
brain-like neural networks, how these representations might be used to
recognize the concepts, and how these representations might be learned.
In Lynch and Mallmann-Trenn (Neural Networks, 2021), we considered simple
tree-structured concepts and feed-forward layered networks. Here we extend the
model in two ways: we allow limited overlap between children of different
concepts, and we allow networks to include feedback edges.
For these more general cases, we describe and analyze algorithms for
recognition and algorithms for learning.
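The model sketched in the abstract recognizes a tree-structured concept once enough of its children have been recognized, layer by layer. Below is a minimal, hedged sketch of that recognition idea in Python; the part-whole hierarchy, the 2/3 firing threshold, and the fixed-point loop are illustrative assumptions, not the paper's construction or parameters.

```python
# Illustrative sketch only: a toy tree-structured concept hierarchy in which a
# concept's neuron "fires" once enough of its children have fired. The
# hierarchy, threshold, and propagation scheme are assumptions for exposition,
# not the algorithms analyzed in the paper.

CONCEPTS = {
    # concept -> its children (leaves do not appear as keys)
    "dog":  ["head", "body", "tail"],
    "head": ["eyes", "ears", "snout"],
    "body": ["torso", "legs", "fur"],
    # the extended model allows limited overlap: e.g. "fur" could also be a
    # child in a different concept's subtree
}

def recognize(presented_leaves, threshold=2 / 3):
    """Fire neurons for the presented leaf features, then propagate upward:
    an internal concept fires once at least `threshold` of its children fired."""
    firing = set(presented_leaves)
    changed = True
    while changed:                      # iterate until no new neuron fires
        changed = False
        for concept, children in CONCEPTS.items():
            if concept not in firing:
                fired = sum(child in firing for child in children)
                if fired >= threshold * len(children):
                    firing.add(concept)
                    changed = True
    return firing

# Partial information still suffices: "snout" and "fur" are missing, yet
# "head", "body", and finally "dog" are all recognized.
print(recognize({"eyes", "ears", "torso", "legs", "tail"}))
```

In a feed-forward layered network the propagation would proceed level by level rather than by a fixed-point loop, and feedback edges (the paper's second extension) would additionally let higher-level concepts reinforce their children; the sketch omits both refinements.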
Related papers
- Multi-Neuron Representations of Hierarchical Concepts in Spiking Neural Networks [0.0]
We describe how hierarchical concepts can be represented in three types of layered neural networks.
The aim is to support recognition of the concepts when only partial information about them is presented, and also when some neurons in the network might fail.
arXiv Detail & Related papers (2024-01-09T15:56:43Z) - Simple Mechanisms for Representing, Indexing and Manipulating Concepts [46.715152257557804]
We will argue that learning a concept could be done by looking at its moment statistics matrix to generate a concrete representation or signature of that concept.
When the concepts are 'intersected', signatures of the concepts can be used to find a common theme across a number of related 'intersected' concepts.
arXiv Detail & Related papers (2023-10-18T17:54:29Z) - Concept Decomposition for Visual Exploration and Inspiration [53.06983340652571]
We propose a method to decompose a visual concept into different visual aspects encoded in a hierarchical tree structure.
We utilize large vision-language models and their rich latent space for concept decomposition and generation.
arXiv Detail & Related papers (2023-05-29T16:56:56Z) - Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural
Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z) - Formal Conceptual Views in Neural Networks [0.0]
We introduce two notions for conceptual views of a neural network, specifically a many-valued and a symbolic view.
We test the conceptual expressivity of our novel views through different experiments on the ImageNet and Fruit-360 data sets.
We demonstrate how conceptual views can be applied for abductive learning of human comprehensible rules from neurons.
arXiv Detail & Related papers (2022-09-27T16:38:24Z) - Learning with Capsules: A Survey [73.31150426300198]
Capsule networks were proposed as an alternative approach to Convolutional Neural Networks (CNNs) for learning object-centric representations.
Unlike CNNs, capsule networks are designed to explicitly model part-whole hierarchical relationships.
arXiv Detail & Related papers (2022-06-06T15:05:36Z) - Binary Multi Channel Morphological Neural Network [5.551756485554158]
We introduce a Binary Morphological Neural Network (BiMoNN) built upon the convolutional neural network.
We demonstrate an equivalence between BiMoNNs and morphological operators that we can use to binarize entire networks.
These can learn classical morphological operators and show promising results on a medical imaging application.
arXiv Detail & Related papers (2022-04-19T09:26:11Z) - Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - From Common Sense Reasoning to Neural Network Models through Multiple
Preferences: an overview [0.0]
We discuss the relationships between conditional and preferential logics and neural network models.
We propose a concept-wise multipreference semantics, recently introduced for defeasible description logics.
The paper describes the general approach, through the cases of Self-Organising Maps and Multilayer Perceptrons.
arXiv Detail & Related papers (2021-07-10T16:25:19Z) - Learning Hierarchically Structured Concepts [3.9795499448909024]
We show how a biologically plausible neural network can recognize hierarchically structured concepts.
For learning, we formally analyze Oja's rule, a well-known biologically plausible rule for adjusting the weights of synapses (a minimal sketch of the rule follows this list).
We complement the learning results with lower bounds asserting that, in order to recognize concepts of a certain hierarchical depth, neural networks must have a corresponding number of layers.
arXiv Detail & Related papers (2019-09-10T15:11:38Z)
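Since the entry above leans on Oja's rule, here is a minimal sketch of that update for reference; the learning rate, input distribution, and iteration count are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of Oja's rule: w <- w + eta * y * (x - y * w), with y = w.x.
# The subtractive y^2 * w term keeps the weight vector roughly unit-norm.
import numpy as np

def oja_step(w, x, eta=0.01):
    y = np.dot(w, x)                    # post-synaptic activity
    return w + eta * y * (x - y * w)    # Hebbian growth with normalization

# Repeated updates drive w toward the first principal component of the inputs.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(5000):
    x = rng.multivariate_normal(np.zeros(3), np.diag([3.0, 1.0, 0.5]))
    w = oja_step(w, x)
print(w)  # approximately +/- the dominant input direction (the first axis)
```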