Bootstrapping Concept Formation in Small Neural Networks
- URL: http://arxiv.org/abs/2110.13665v1
- Date: Tue, 26 Oct 2021 12:58:27 GMT
- Title: Bootstrapping Concept Formation in Small Neural Networks
- Authors: Minija Tamosiunaite, Tomas Kulvicius, and Florentin Wörgötter
- Abstract summary: We argue that, first, Concepts are formed as closed representations, which are then consolidated by relating them to each other.
We present a model system (agent) with a small neural network that uses realistic learning rules and receives only feedback from the environment in which the agent performs virtual actions.
- Score: 2.580765958706854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The question of how neural systems (of humans) can perform reasoning is still
far from being solved. We posit that the process of forming Concepts is a
fundamental step required for this. We argue that, first, Concepts are formed
as closed representations, which are then consolidated by relating them to each
other. Here we present a model system (agent) with a small neural network that
uses realistic learning rules and receives only feedback from the environment
in which the agent performs virtual actions. First, the actions of the agent
are reflexive. In the process of learning, statistical regularities in the
input lead to the formation of neuronal pools representing relations between
the entities observed by the agent from its artificial world. This information
then influences the behavior of the agent via feedback connections replacing
the initial reflex by an action driven by these relational representations. We
hypothesize that the neuronal pools representing relational information can be
considered as primordial Concepts, which may in a similar way be present in
some pre-linguistic animals, too. We argue that systems such as this can help
formalize the discussion about what constitutes Concepts and serve as a
starting point for constructing artificial cogitating systems.
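To make the described mechanism concrete, a minimal sketch is given below. It is not the authors' network or learning rules: it assumes a Hebbian-style co-occurrence update whose weights converge toward the conditional co-occurrence statistics of a toy world, plus a fixed threshold at which the learned relational weights take over from the reflex; all names and constants (N_ENTITIES, THRESHOLD, reflex_action) are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ENTITIES = 5      # entities observable in the toy world
LEARN_RATE = 0.05   # Hebbian-style learning rate (illustrative value)
THRESHOLD = 0.5     # weight above which a learned relation overrides the reflex

# Relational weights between entities; a consolidated "neuronal pool" is sketched
# here simply as a strong entry w[i, j] linking entity i to entity j.
w = np.zeros((N_ENTITIES, N_ENTITIES))
relation_driven = []   # True if the learned relation acted, False if the reflex did

def reflex_action(entity):
    """Hard-wired initial behaviour: always respond with the same default action."""
    return 0

def observe_world():
    """Toy environment: entity i usually co-occurs with entity (i + 1) mod N."""
    i = int(rng.integers(N_ENTITIES))
    j = (i + 1) % N_ENTITIES if rng.random() < 0.8 else int(rng.integers(N_ENTITIES))
    return i, j

for step in range(2000):
    i, j = observe_world()

    # Hebbian-style update with decay: row i converges toward the conditional
    # co-occurrence statistics of the environment, i.e. w[i, j] -> P(j | i).
    w[i] += LEARN_RATE * (np.eye(N_ENTITIES)[j] - w[i])

    # Feedback pathway: once a relation is consolidated it drives the action,
    # replacing the initial reflex.
    if w[i].max() > THRESHOLD:
        action = int(np.argmax(w[i]))
        relation_driven.append(True)
    else:
        action = reflex_action(i)
        relation_driven.append(False)

print(np.round(w, 2))                                      # learned relational weights
print("relation-driven (first 200 steps):", np.mean(relation_driven[:200]))
print("relation-driven (last 200 steps): ", np.mean(relation_driven[-200:]))
```

In this toy run the reflex drives the earliest actions and the learned relations take over as the weights consolidate, mirroring the reflex-to-relation transition described in the abstract.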
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN), which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
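The entry above builds on the classical Kuramoto model of coupled phase oscillators. The sketch below simulates only that classical model as an illustration of oscillatory "binding" (synchronization), not the paper's neuron design, connectivity, or training setup; the constants N, K, DT and the Gaussian frequency spread are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32        # number of oscillatory units
K = 2.0       # coupling strength ("binding" force between units)
DT = 0.01     # Euler integration step
STEPS = 5000

theta = rng.uniform(0.0, 2.0 * np.pi, N)   # initial phases
omega = rng.normal(0.0, 0.5, N)            # natural frequencies

for _ in range(STEPS):
    # Classical Kuramoto update: each phase is pulled toward the others, so
    # sufficiently coupled units synchronize into a coherent ("bound") group.
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += DT * (omega + K * coupling)

# Order parameter r in [0, 1]: values near 1 indicate a synchronized population.
r = float(np.abs(np.exp(1j * theta).mean()))
print(f"synchronization r = {r:.2f}")
```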
- From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks [15.837316393474403]
Concepts can act as a natural link between learning and reasoning.
Knowledge can not only be extracted from neural networks but concept knowledge can also be inserted into neural network architectures.
arXiv Detail & Related papers (2023-10-18T11:08:02Z)
- Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning [22.201878275784246]
Focus in Explainable AI is shifting from explanations defined in terms of low-level elements, such as input features, to explanations encoded in terms of interpretable concepts learned from data.
How to reliably acquire such concepts is, however, still fundamentally unclear.
We propose a mathematical framework for acquiring interpretable representations suitable for both post-hoc explainers and concept-based neural networks.
arXiv Detail & Related papers (2023-09-14T14:26:20Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
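Concept Activation Vectors are commonly obtained by fitting a linear probe in a layer's activation space and taking the normalized probe weights as the concept direction. The sketch below illustrates only that generic recipe; the activation arrays and the gradient are synthetic placeholders, and nothing here is taken from the cited paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear probe separating concept examples from random examples in
    activation space; the normalized probe weights serve as the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    v = probe.coef_.ravel()
    return v / np.linalg.norm(v)

# Synthetic stand-ins for recorded layer activations (64-dimensional here).
rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(100, 64))   # activations for "concept" inputs
random_acts = rng.normal(loc=0.0, size=(100, 64))    # activations for random inputs
cav = concept_activation_vector(concept_acts, random_acts)

# Conceptual sensitivity of one input: the gradient of a class score with respect
# to the layer activations, projected onto the CAV (a random placeholder gradient).
grad = rng.normal(size=64)
print(float(grad @ cav))
```

Sensitivity scores of this kind are typically aggregated over many inputs to test whether a concept systematically influences a class prediction.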
- Condition Integration Memory Network: An Interpretation of the Meaning of the Neuronal Design [10.421465303670638]
This document introduces a hypothetical framework for the functional nature of primitive neural networks.
It analyzes the idea that the activity of neurons and synapses can symbolically reenact the dynamic changes in the world, without participating in an algorithmic structure.
arXiv Detail & Related papers (2021-05-21T05:59:27Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously [8.477619837043214]
Critics worry that neural network models fail to illuminate brain function.
We argue that certain kinds of neural network models are actually good examples of mechanistic models.
arXiv Detail & Related papers (2021-04-03T22:17:40Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
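The procedure cited above matches a neuron's binarized activation mask against logical compositions of annotated concept masks, scored by intersection-over-union. The sketch below shows a simplified pairwise variant of that idea with synthetic boolean masks and an exhaustive sweep in place of the paper's full search; all concept names are made up for illustration.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter / union) if union else 0.0

def best_composition(neuron_mask, concept_masks):
    """Search pairwise logical compositions of concepts for the mask that best
    matches the neuron's binarized activation mask (measured by IoU)."""
    ops = {
        "AND": np.logical_and,
        "OR": np.logical_or,
        "AND NOT": lambda a, b: np.logical_and(a, np.logical_not(b)),
    }
    best_name, best_score = "", -1.0
    names = list(concept_masks)
    for a in names:
        for b in names:
            if a == b:
                continue
            for op_name, op in ops.items():
                score = iou(neuron_mask, op(concept_masks[a], concept_masks[b]))
                if score > best_score:
                    best_name, best_score = f"{a} {op_name} {b}", score
    return best_name, best_score

# Toy data: boolean masks over 1000 inputs; real usage would binarize a neuron's
# activations and use annotated concept masks instead.
rng = np.random.default_rng(0)
concepts = {name: rng.random(1000) < 0.3 for name in ["water", "boat", "sky"]}
neuron = np.logical_and(concepts["water"], np.logical_not(concepts["sky"]))
print(best_composition(neuron, concepts))   # expected: ('water AND NOT sky', 1.0)
```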
- A Concept-Value Network as a Brain Model [0.0]
This paper suggests a statistical framework for describing the relations between the physical and conceptual entities of a brain-like model.
The paper suggests that features may be the electrical wiring, although chemical connections are also possible.
arXiv Detail & Related papers (2019-04-09T10:30:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.