Discrete, compositional, and symbolic representations through attractor
dynamics
- URL: http://arxiv.org/abs/2310.01807v1
- Date: Tue, 3 Oct 2023 05:40:56 GMT
- Title: Discrete, compositional, and symbolic representations through attractor
dynamics
- Authors: Andrew Nam, Eric Elmoznino, Nikolay Malkin, Chen Sun, Yoshua Bengio,
Guillaume Lajoie
- Abstract summary: We show that imposing structure in the symbolic space can produce compositionality in the attractor-supported representation space of rich sensory inputs.
We argue that our model exhibits the process of an information bottleneck that is thought to play a role in conscious experience.
- Score: 61.58042831010077
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Compositionality is an important feature of discrete symbolic systems, such
as language and programs, as it enables them to have infinite capacity despite
a finite symbol set. It serves as a useful abstraction for reasoning in both
cognitive science and in AI, yet the interface between continuous and symbolic
processing is often imposed by fiat at the algorithmic level, such as by means
of quantization or a softmax sampling step. In this work, we explore how
discretization could be implemented in a more neurally plausible manner through
the modeling of attractor dynamics that partition the continuous representation
space into basins that correspond to sequences of symbols. Building on
established work in attractor networks and introducing novel training methods,
we show that imposing structure in the symbolic space can produce
compositionality in the attractor-supported representation space of rich
sensory inputs. Lastly, we argue that our model exhibits the process of an
information bottleneck that is thought to play a role in conscious experience,
decomposing the rich information of a sensory input into stable components
encoding symbolic information.
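To make the core mechanism concrete: the sketch below is a minimal, classic Hopfield-style attractor network, not the paper's model or training method. K stored bipolar patterns, one per symbol, act as fixed points, and a continuous noisy input relaxes into the basin of the nearest one; all sizes are illustrative.

```python
# Minimal sketch: discretization via attractor dynamics (classic Hopfield
# network; the paper's actual architecture and training differ).
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 8                                    # state size, symbol count
symbols = rng.choice([-1.0, 1.0], size=(K, N))   # one attractor per symbol

# Hebbian weights; the zeroed diagonal keeps the update well-behaved.
W = symbols.T @ symbols / N
np.fill_diagonal(W, 0.0)

def relax(x, steps=50):
    """Run the dynamics until the state settles into a basin."""
    s = np.sign(x)
    for _ in range(steps):
        s_next = np.sign(W @ s)
        if np.array_equal(s_next, s):            # reached a fixed point
            break
        s = s_next
    return s

# A rich continuous "sensory" input: symbol 3 plus heavy noise.
x = symbols[3] + 1.5 * rng.standard_normal(N)
s = relax(x)
print("decoded symbol:", int(np.argmax(symbols @ s)))  # expected: 3
```

In this toy version the basins are simply the regions around random stored patterns; the paper's contribution is learning the attractor landscape so that basins correspond to sequences of symbols and inherit compositional structure.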
Related papers
- LARS-VSA: A Vector Symbolic Architecture For Learning with Abstract Rules [1.3049516752695616]
We propose a "relational bottleneck" that separates object-level features from abstract rules, allowing learning from limited amounts of data.
We adapt the "relational bottleneck" strategy to a high-dimensional space, incorporating explicit vector binding operations between symbols and relational representations.
Our system benefits from the low overhead of operations in hyperdimensional space, making it significantly more efficient than the state of the art when evaluated on a variety of test datasets.
arXiv Detail & Related papers (2024-05-23T11:05:42Z)
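The "explicit vector binding operations" in the LARS-VSA summary above admit a simple illustration; the sketch below uses the common multiply-bind scheme over bipolar hypervectors, which may differ from the paper's exact operators.

```python
# Hedged sketch of hyperdimensional binding/unbinding (MAP-style VSA);
# LARS-VSA's actual operators may differ.
import numpy as np

rng = np.random.default_rng(1)
D = 10_000                                    # hyperdimensional width

def hv():                                     # random bipolar hypervector
    return rng.choice([-1, 1], size=D)

role_color, role_shape = hv(), hv()
red, circle = hv(), hv()

# Binding = elementwise multiply; bundling = addition. The composite
# stores both role-filler pairs in superposition.
scene = role_color * red + role_shape * circle

# Unbinding: multiplying by a role again (it is self-inverse) recovers
# a noisy copy of its filler, identified by cosine similarity.
query = scene * role_color

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(query, red), cos(query, circle))    # ~0.7 vs. ~0.0
```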
- stl2vec: Semantic and Interpretable Vector Representation of Temporal Logic [0.5956301166481089]
We propose a semantically grounded vector representation (feature embedding) of logic formulae.
We compute continuous embeddings of formulae with several desirable properties.
We demonstrate the efficacy of the approach on two tasks: learning model checking and integration into a neurosymbolic framework.
arXiv Detail & Related papers (2024-05-23T10:04:56Z)
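The summary above does not spell out the construction; one simple way to obtain a semantically grounded formula embedding, sketched here purely as an assumption, is to represent each formula by its quantitative satisfaction (robustness) on a shared bank of sampled trajectories, so that semantically similar formulae land nearby.

```python
# Illustrative sketch (not stl2vec's actual construction): embed a
# temporal-logic formula by its robustness scores on shared sample signals.
import numpy as np

rng = np.random.default_rng(2)
trajs = rng.standard_normal((100, 50))     # 100 sampled signals, 50 steps

def robustness(traj, c):
    # Quantitative satisfaction of "eventually x > c": max margin over time.
    return np.max(traj - c)

def embed(c):
    return np.array([robustness(t, c) for t in trajs])

e1, e2, e3 = embed(0.0), embed(0.1), embed(2.0)
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(e1, e2), cos(e1, e3))            # nearby formulae embed closer
```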
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
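As an illustration of the fuzzy-logic relaxation described above, the sketch below grounds a single rule on predicted class probabilities using the product t-norm; this is one standard choice, not necessarily LOGICSEG's. Under that relaxation the implication a => b becomes 1 - a*(1 - b), so its violation a*(1 - b) can serve as a differentiable penalty.

```python
# Hedged sketch: grounding the rule "car -> vehicle" on a pixel's predicted
# class probabilities with the product t-norm (illustrative choice).
import torch

# In a real model these come from the segmentation network's softmax.
probs = torch.tensor([0.7, 0.2], requires_grad=True)  # [P(car), P(vehicle)]
p_car, p_vehicle = probs

logic_loss = p_car * (1.0 - p_vehicle)  # large when car is high, vehicle low
logic_loss.backward()                   # gradients flow back into the net
print(float(logic_loss), probs.grad)
```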
- On the Transition from Neural Representation to Symbolic Knowledge [2.2528422603742304]
We propose a Neural-Symbolic Transitional Dictionary Learning (TDL) framework that employs an EM algorithm to learn a transitional representation of data.
We implement the framework with a diffusion model by regarding the decomposition of input as a cooperative game.
We additionally use RL, enabled by the Markovian property of diffusion models, to further tune the learned prototypes.
arXiv Detail & Related papers (2023-08-03T19:29:35Z)
- Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
arXiv Detail & Related papers (2022-12-30T17:50:54Z)
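One way to picture the "differentiable expression search" named in the DiffSES title above, offered here only as a hedged sketch rather than the paper's algorithm: score a set of candidate symbolic primitives over object-level features, act with their softmax-weighted mixture during training, and read off the argmax primitive as the discrete policy afterwards.

```python
# Loose sketch of differentiable expression search over object features
# (not DiffSES itself; primitives and features are illustrative).
import torch

feats = torch.randn(32, 2)                 # e.g. (distance, angle) per state
primitives = [
    lambda f: f[:, 0],                     # candidate symbolic expressions
    lambda f: -f[:, 0],
    lambda f: f[:, 1],
    lambda f: f[:, 0] * f[:, 1],
]
alpha = torch.nn.Parameter(torch.zeros(len(primitives)))  # operator scores

def policy(f, tau=1.0):
    w = torch.softmax(alpha / tau, dim=0)  # soft, differentiable selection
    return sum(wi * p(f) for wi, p in zip(w, primitives))

out = policy(feats)                        # trainable end to end
print(out.shape, int(alpha.argmax()))      # discrete expression after search
```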
- Deep Symbolic Learning: Discovering Symbols and Rules from Perceptions [69.40242990198]
Neuro-Symbolic (NeSy) integration combines symbolic reasoning with Neural Networks (NNs) for tasks requiring perception and reasoning.
Most NeSy systems rely on continuous relaxation of logical knowledge, and no discrete decisions are made within the model pipeline.
We propose a NeSy system that learns NeSy-functions, i.e., the composition of a (set of) perception functions which map continuous data to discrete symbols, and a symbolic function over the set of symbols.
arXiv Detail & Related papers (2022-08-24T14:06:55Z)
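To make the NeSy-function idea above tangible: the sketch below composes a perception function that commits to discrete symbols (hard argmax forward, softmax gradients backward via a straight-through estimator) with a learnable symbolic function over symbol pairs. The straight-through trick and the digit-sum shapes are assumptions for illustration, not the paper's mechanism.

```python
# Hedged sketch of a NeSy-function: perception -> discrete symbols ->
# symbolic function, with discrete decisions inside the pipeline.
import torch
import torch.nn.functional as F

n_symbols, n_outputs = 10, 19                 # e.g. digits and their sums
perceive = torch.nn.Linear(784, n_symbols)    # toy perception function
table = torch.nn.Parameter(torch.randn(n_symbols, n_symbols, n_outputs))

def to_symbol(x):
    probs = F.softmax(perceive(x), dim=-1)
    hard = F.one_hot(probs.argmax(dim=-1), n_symbols).float()
    return hard + probs - probs.detach()      # straight-through estimator

def nesy_fn(x1, x2):
    s1, s2 = to_symbol(x1), to_symbol(x2)     # hard symbol choices
    # Symbolic function: look up the (s1, s2) cell of a learnable table.
    return torch.einsum('bi,bj,ijk->bk', s1, s2, table)

x1, x2 = torch.randn(4, 784), torch.randn(4, 784)
print(nesy_fn(x1, x2).shape)                  # torch.Size([4, 19])
```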
- On Binding Objects to Symbols: Learning Physical Concepts to Understand Real from Fake [155.6741526791004]
We revisit the classic signal-to-symbol barrier in light of the remarkable ability of deep neural networks to generate synthetic data.
We characterize physical objects as abstract concepts and use the previous analysis to show that physical objects can be encoded by finite architectures.
We conclude that binding physical entities to digital identities is possible in finite time with finite resources.
arXiv Detail & Related papers (2022-07-25T17:21:59Z)
- Discrete and continuous representations and processing in deep learning: Looking forward [18.28761409764605]
We argue that combining discrete and continuous representations and their processing will be essential to build systems that exhibit a general form of intelligence.
We suggest and discuss several avenues that could improve current neural networks with the inclusion of discrete elements to combine the advantages of both types of representations.
arXiv Detail & Related papers (2022-01-04T16:30:18Z)
- Symbolic Learning and Reasoning with Noisy Data for Probabilistic Anchoring [19.771392829416992]
We propose a semantic world modeling approach based on bottom-up object anchoring.
We extend the definitions of anchoring to handle multi-modal probability distributions.
We use statistical relational learning to enable the anchoring framework to learn symbolic knowledge.
arXiv Detail & Related papers (2020-02-24T16:58:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.