Learning Hierarchically Structured Concepts
- URL: http://arxiv.org/abs/1909.04559v6
- Date: Tue, 27 Feb 2024 13:25:45 GMT
- Title: Learning Hierarchically Structured Concepts
- Authors: Nancy Lynch and Frederik Mallmann-Trenn
- Abstract summary: We show how a biologically plausible neural network can recognize hierarchically structured concepts.
For learning, we formally analyze Oja's rule, a well-known biologically plausible rule for adjusting the weights of synapses.
We complement the learning results with lower bounds asserting that, in order to recognize concepts of a certain hierarchical depth, neural networks must have a corresponding number of layers.
- Score: 3.9795499448909024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the question of how concepts that have structure get represented in
the brain. Specifically, we introduce a model for hierarchically structured
concepts and we show how a biologically plausible neural network can recognize
these concepts, and how it can learn them in the first place. Our main goal is
to introduce a general framework for these tasks and prove formally how both
(recognition and learning) can be achieved.
We show that both tasks can be accomplished even in the presence of noise. For
learning, we formally analyze Oja's rule, a well-known biologically plausible
rule for adjusting the weights of synapses. We complement the learning results
with lower bounds asserting that, in order to recognize concepts of a certain
hierarchical depth, neural networks must have a corresponding number of layers.
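Since the abstract singles out Oja's rule as the learning mechanism, a minimal sketch of that rule in isolation may help. This is a hedged illustration in plain NumPy: the toy data, dimensions, and learning rate are our own choices, not the paper's layered-network or noise model.

```python
import numpy as np

# Oja's rule for a single linear neuron with output y = w . x:
#     w <- w + eta * y * (x - y * w)
# The (x - y*w) term keeps ||w|| near 1 without explicit renormalization,
# and for small eta the weights converge to the principal component of
# the input distribution.

rng = np.random.default_rng(0)

def oja_step(w, x, eta=0.01):
    y = w @ x
    return w + eta * y * (x - y * w)

# Toy inputs with one dominant direction, standing in for the repeated
# presentation of a concept's feature pattern.
d = 8
principal = rng.normal(size=d)
principal /= np.linalg.norm(principal)
X = 0.1 * rng.normal(size=(5000, d)) + rng.normal(size=(5000, 1)) * principal

w = 0.1 * rng.normal(size=d)
for x in X:
    w = oja_step(w, x)

# The learned weight vector should align (up to sign) with that direction.
print(abs(w @ principal) / np.linalg.norm(w))  # close to 1.0
```

The self-normalizing correction term is what distinguishes Oja's rule from plain Hebbian learning, which would otherwise require an explicit (and biologically implausible) renormalization step after each update.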
Related papers
- Fundamental Components of Deep Learning: A category-theoretic approach [0.0]
This thesis develops a novel mathematical foundation for deep learning based on the language of category theory.
We also systematise many existing approaches, placing many existing constructions and concepts under the same umbrella.
arXiv Detail & Related papers (2024-03-13T01:29:40Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks [15.837316393474403]
Concepts can act as a natural link between learning and reasoning.
Concept knowledge can not only be extracted from neural networks but also inserted into neural network architectures.
arXiv Detail & Related papers (2023-10-18T11:08:02Z)
- Learning Hierarchically-Structured Concepts II: Overlapping Concepts, and Networks With Feedback [4.847980206213334]
In Lynch and Mallmann-Trenn (Neural Networks, 2021), we considered simple tree-structured concepts and feed-forward layered networks.
Here we extend the model in two ways: we allow limited overlap between children of different concepts, and we allow networks to include feedback edges.
We describe and analyze algorithms for recognition and algorithms for learning (see the sketch after this entry).
arXiv Detail & Related papers (2023-04-19T10:11:29Z)
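As a toy, non-authoritative illustration of the recognition task in this line of work: our own reading of the tree-structured model, where the dictionary encoding and the threshold fraction tau are illustrative stand-ins, not the papers' neuron-level algorithm.

```python
# A concept is a tree: leaves are primitive features, and an internal
# concept is recognized when at least a fraction tau of its children are
# recognized (tau < 1 gives some robustness to noise/missing features).

def recognized(concept, presented, tau=1.0):
    children = concept.get("children")
    if not children:                      # leaf: recognized iff presented
        return concept["name"] in presented
    hits = sum(recognized(c, presented, tau) for c in children)
    return hits >= tau * len(children)

dog = {"name": "dog", "children": [
    {"name": "head", "children": [{"name": "eyes"}, {"name": "snout"}]},
    {"name": "tail"},
]}

print(recognized(dog, {"eyes", "snout", "tail"}))   # True: all leaves present
print(recognized(dog, {"eyes", "tail"}, tau=0.5))   # True: tolerates a missing leaf
```

Each tree level here corresponds to one network layer, which is consistent with the lower bound quoted above tying hierarchical depth to the required number of layers.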
- Neural Network based Successor Representations of Space and Language [6.748976209131109]
We present a neural network-based approach to learn multi-scale successor representations of structured knowledge.
In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations.
We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning on the path toward artificial general intelligence (see the sketch after this entry).
arXiv Detail & Related papers (2022-02-22T21:52:46Z)
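The successor representation mentioned above has a standard tabular form that a short sketch can make concrete. This is generic temporal-difference learning of the SR; the ring environment and hyperparameters are illustrative, not the paper's multi-scale network.

```python
import numpy as np

# Tabular successor representation M, learned by temporal-difference updates:
#     M(s, :) <- M(s, :) + alpha * (one_hot(s) + gamma * M(s', :) - M(s, :))
# M(s, s') estimates the expected discounted number of future visits to s'
# starting from s, which is what encodes the structure of the space.

def learn_sr(transitions, n_states, alpha=0.1, gamma=0.9, epochs=200):
    M = np.zeros((n_states, n_states))
    I = np.eye(n_states)
    for _ in range(epochs):
        for s, s_next in transitions:
            M[s] += alpha * (I[s] + gamma * M[s_next] - M[s])
    return M

# Toy 4-state ring 0 -> 1 -> 2 -> 3 -> 0; the paper uses richer spatial and
# linguistic structures, but the update rule is the same.
M = learn_sr([(0, 1), (1, 2), (2, 3), (3, 0)], n_states=4)
print(np.round(M, 2))  # each row decays with discounted distance along the ring
```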
- The brain as a probabilistic transducer: an evolutionarily plausible network architecture for knowledge representation, computation, and behavior [14.505867475659274]
We offer a general theoretical framework for brain and behavior that is evolutionarily and computationally plausible.
The brain in our abstract model is a network of nodes and edges. Both nodes and edges in our network have weights and activation levels.
By specifying the innate (genetic) components of the network, we show how evolution could endow the network with initial adaptive rules and goals that are then enriched through learning.
arXiv Detail & Related papers (2021-12-26T14:37:47Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted (see the sketch after this entry).
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
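To make "a differentiable layer from which rules can be extracted" concrete, here is one generic construction: a product t-norm soft conjunction with weight thresholding. This is a textbook fuzzy-logic device offered as an analogy, not necessarily pix2rule's actual layer; the gate values and literal names are invented for illustration.

```python
import numpy as np

# Differentiable soft conjunction: each gate w_i in [0, 1] controls how much
# literal x_i participates; w_i = 0 removes the literal, w_i = 1 enforces it.
def soft_and(x, w):
    return np.prod(1.0 - w * (1.0 - x), axis=-1)

# Crisp rule extraction: keep a literal if its learned gate is strong enough.
def extract_rule(w, names, thresh=0.5):
    return " AND ".join(n for n, g in zip(names, w) if g > thresh)

w = np.array([0.9, 0.05, 0.8])   # gates (learned in practice; hand-set here)
x = np.array([1.0, 0.0, 1.0])    # truth values of the input literals
print(soft_and(x, w))            # ~0.95: the gated literals are satisfied
print(extract_rule(w, ["red", "small", "round"]))  # red AND round
```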
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose a new benchmark Bongard-LOGO for human-level concept learning and reasoning.
arXiv Detail & Related papers (2020-10-02T03:19:46Z)
- Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known SCAN benchmark demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.