Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information
- URL: http://arxiv.org/abs/2010.00055v1
- Date: Wed, 30 Sep 2020 18:49:29 GMT
- Title: Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information
- Authors: Florian Mirus, Terrence C. Stewart, Jörg Conradt
- Abstract summary: We focus on simple superposition and more complex, structured representations involving convolutive powers to encode spatial information.
In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vector Symbolic Architectures belong to a family of related cognitive
modeling approaches that encode symbols and structures in high-dimensional
vectors. Similar to human subjects, whose capacity to process and store
information or concepts in short-term memory is subject to numerical
restrictions, the amount of information that can be encoded in such vector
representations is limited, and modeling this limit is one way of capturing
the numerical restrictions on cognition. In this paper, we analyze these
limits on the information
capacity of distributed representations. We focus our analysis on simple
superposition and more complex, structured representations involving
convolutive powers to encode spatial information. In two experiments, we find
upper bounds for the number of concepts that can effectively be stored in a
single vector.
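To make the two encoding schemes concrete, here is a minimal sketch (not the authors' code; the dimensionality, item counts, and position values are illustrative assumptions) of simple superposition and of binding an object to a 2D position via convolutive powers, using HRR-style circular convolution:

```python
# Minimal sketch of the two encodings discussed in the abstract
# (illustrative parameters; not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)
D = 512  # vector dimensionality (assumption)

def unitary(d):
    """Real unitary vector: unit-magnitude spectrum with Hermitian symmetry."""
    spec = np.exp(1j * rng.uniform(-np.pi, np.pi, d))
    spec[0] = 1.0
    spec[d // 2 + 1:] = np.conj(spec[1:(d + 1) // 2][::-1])
    if d % 2 == 0:
        spec[d // 2] = 1.0
    return np.fft.ifft(spec).real

def cconv(a, b):
    """Circular convolution: the binding operation in HRR-style VSAs."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def cpow(a, p):
    """Convolutive power a^p; fractional p gives continuous spatial encoding."""
    return np.fft.ifft(np.fft.fft(a) ** p).real

# Structured encoding: bind an object vector to position (2.0, -1.5).
X, Y = unitary(D), unitary(D)
obj = rng.standard_normal(D) / np.sqrt(D)
scene = cconv(obj, cconv(cpow(X, 2.0), cpow(Y, -1.5)))

# Unbinding with the inverse powers recovers the object almost exactly.
decoded = cconv(cconv(scene, cpow(X, -2.0)), cpow(Y, 1.5))
print("similarity after unbinding:", np.dot(decoded, obj))  # ~1.0

# Superposition capacity probe: store k items in one sum vector and compare
# similarities of stored vs. unstored items; retrieval degrades as k grows.
items = rng.standard_normal((50, D)) / np.sqrt(D)
for k in (5, 20, 45):
    sims = items @ items[:k].sum(axis=0)
    print(k, "stored: min stored sim", round(sims[:k].min(), 2),
          "vs. max unstored sim", round(sims[k:].max(), 2))
```

The capacity probe mirrors the paper's question: as k grows, the similarity gap between stored and unstored items shrinks until retrieval becomes unreliable.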
Related papers
- Knowledge Composition using Task Vectors with Learned Anisotropic Scaling [51.4661186662329]
We introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level.
We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters.
We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives.
arXiv Detail & Related papers (2024-07-03T07:54:08Z)
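As a toy illustration of the linear combination described above (a hedged sketch: block names, shapes, and coefficient values are invented, and the real method learns the coefficients rather than hard-coding them):

```python
# Toy sketch of task-vector composition with per-block (anisotropic)
# coefficients; names/values are placeholders, coefficients would be learned.
import numpy as np

pretrained = {"layer1": np.zeros((4, 4)), "layer2": np.zeros(4)}
finetuned = [
    {k: v + 0.1 for k, v in pretrained.items()},  # model fine-tuned on task A
    {k: v - 0.2 for k, v in pretrained.items()},  # model fine-tuned on task B
]

# Task vector = fine-tuned weights minus pre-trained weights, per block.
task_vecs = [{k: ft[k] - pretrained[k] for k in pretrained} for ft in finetuned]

# One coefficient per (parameter block, task) pair -> anisotropic scaling.
coeffs = {"layer1": [0.8, -0.3], "layer2": [0.1, 0.5]}

merged = {
    k: pretrained[k] + sum(c * tv[k] for c, tv in zip(coeffs[k], task_vecs))
    for k in pretrained
}
print(merged["layer1"][0, 0], merged["layer2"][0])
```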
- LARS-VSA: A Vector Symbolic Architecture For Learning with Abstract Rules [1.3049516752695616]
We propose a "relational bottleneck" that separates object-level features from abstract rules, allowing learning from limited amounts of data.
We adapt the "relational bottleneck" strategy to a high-dimensional space, incorporating explicit vector binding operations between symbols and relational representations.
Our system benefits from the low overhead of operations in hyperdimensional space, making it significantly more efficient than the state of the art when evaluated on a variety of test datasets.
arXiv Detail & Related papers (2024-05-23T11:05:42Z)
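For readers unfamiliar with the explicit vector binding the summary mentions, a generic hyperdimensional sketch (bipolar vectors and multiply-binding are common VSA conventions, assumed here rather than taken from the paper):

```python
# Generic hyperdimensional binding/bundling sketch (conventions assumed,
# not taken from the LARS-VSA paper): bipolar vectors, elementwise multiply
# as binding, majority sign as bundling.
import numpy as np

rng = np.random.default_rng(1)
D = 10_000
symbol = rng.choice([-1, 1], D)    # object-level feature vector
role = rng.choice([-1, 1], D)      # abstract relational role vector

bound = symbol * role                   # binding
assert (bound * role == symbol).all()   # multiply-binding is self-inverse

# Bundle several bound pairs into one vector via the majority sign.
others = [rng.choice([-1, 1], D) * rng.choice([-1, 1], D) for _ in range(2)]
bundle = np.sign(bound + sum(others))
print("similarity to a stored pair:", (bundle * bound).mean())  # ~0.5
```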
- Labeling Neural Representations with Inverse Recognition [25.867702786273586]
Inverse Recognition (INVERT) is a scalable approach for connecting learned representations with human-understandable concepts.
In contrast to prior work, INVERT is capable of handling diverse types of neurons, has lower computational complexity, and does not rely on the availability of segmentation masks.
We demonstrate the applicability of INVERT in various scenarios, including the identification of representations affected by spurious correlations.
arXiv Detail & Related papers (2023-11-22T18:55:25Z)
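A loose sketch of the neuron-labeling idea (assuming an AUC-style compatibility measure between activations and concept labels; details of the actual INVERT procedure may differ, and the data below is synthetic):

```python
# Loose sketch: label a neuron with the concept whose presence its
# activations separate best (AUC-style measure assumed; synthetic data).
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_concepts = 1000, 5
activations = rng.standard_normal(n_inputs)            # one neuron's outputs
concepts = rng.integers(0, 2, (n_concepts, n_inputs))  # binary concept labels

def auc(scores, labels):
    """Probability that a concept-positive input outscores a negative one."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

best = max(range(n_concepts), key=lambda c: auc(activations, concepts[c]))
print("neuron labeled with concept", best)
```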
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Residual and Attentional Architectures for Vector-Symbols [0.0]
Vector-symbolic architectures (VSAs) provide computing methods that are highly flexible and carry unique advantages.
In this work, we combine the efficiency of the operations provided within the framework of the Fourier Holographic Reduced Representation (FHRR) VSA with the power of deep networks to construct novel VSA-based residual and attention-based neural network architectures.
This demonstrates a novel application of VSAs and a potential path to implementing state-of-the-art neural models on neuromorphic hardware.
arXiv Detail & Related papers (2022-07-18T21:38:43Z)
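For background on the FHRR operations the summary leans on (standard FHRR conventions, not the paper's network code): vectors are arrays of unit-magnitude complex phasors, binding is elementwise multiplication, and unbinding multiplies by the conjugate.

```python
# Standard FHRR operations (background sketch, not the paper's architecture).
import numpy as np

rng = np.random.default_rng(3)
D = 1024

def phasor(d):
    """FHRR vector: unit-magnitude complex phasors with random phases."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

a, b = phasor(D), phasor(D)
bound = a * b                   # binding: elementwise multiplication
recovered = bound * np.conj(b)  # unbinding: multiply by the conjugate
print("recovery similarity:", np.abs(np.vdot(recovered, a)) / D)  # exactly 1.0
```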
- Toward a Geometrical Understanding of Self-supervised Contrastive Learning [55.83778629498769]
Self-supervised learning (SSL) is one of the premier techniques to create data representations that are actionable for transfer learning in the absence of human annotations.
Mainstream SSL techniques rely on a specific deep neural network architecture with two cascaded neural networks: the encoder and the projector.
In this paper, we investigate how the strength of the data augmentation policies affects the data embedding.
arXiv Detail & Related papers (2022-05-13T23:24:48Z)
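A minimal stand-in for the encoder/projector pipeline the summary describes (layer sizes, the ReLU encoder, and the noise augmentation are placeholders, not details from the paper):

```python
# Minimal encoder/projector stand-in (placeholder sizes and augmentation):
# the projector output feeds the contrastive objective, while the encoder
# output is the representation kept for transfer learning.
import numpy as np

rng = np.random.default_rng(4)
W_enc = 0.03 * rng.standard_normal((128, 784))   # encoder weights
W_proj = 0.09 * rng.standard_normal((32, 128))   # projector weights

def embed(x):
    h = np.maximum(W_enc @ x, 0.0)   # encoder representation
    z = W_proj @ h                   # projection used by the SSL loss
    return h, z / np.linalg.norm(z)

x = rng.standard_normal(784)                 # an input
x_aug = x + 0.1 * rng.standard_normal(784)   # larger noise = stronger augmentation
_, z1 = embed(x)
_, z2 = embed(x_aug)
print("agreement between the two views:", float(z1 @ z2))
```

Varying the noise scale on the augmented view is the toy analogue of varying augmentation-policy strength and watching its effect on the embedding.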
- Concept Activation Vectors for Generating User-Defined 3D Shapes [11.325580593182414]
We explore the interpretability of 3D geometric deep learning models in the context of Computer-Aided Design (CAD).
We use deep learning architectures to encode high-dimensional 3D shapes into a vectorized latent representation that can be used to describe arbitrary concepts.
arXiv Detail & Related papers (2022-04-29T13:09:18Z)
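For context, the generic concept-activation-vector recipe (the standard CAV construction, hedged: the paper's 3D shape encoder and concepts are not reproduced here) fits a linear probe separating latents of examples that show a concept from those that do not; the probe's normalized weight vector is the concept direction.

```python
# Generic CAV construction on synthetic latents (standard recipe; the
# paper's 3D/CAD specifics are not reproduced here).
import numpy as np

rng = np.random.default_rng(5)
with_concept = rng.standard_normal((100, 64)) + 0.5   # latents showing concept
without = rng.standard_normal((100, 64)) - 0.5        # latents without it

X = np.vstack([with_concept, without])
y = np.array([1.0] * 100 + [-1.0] * 100)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares linear probe
cav = w / np.linalg.norm(w)                # concept activation vector

latent = rng.standard_normal(64)
edited = latent + 2.0 * cav                # push a latent toward the concept
```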
- Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization [76.68866368409216]
We propose learning to dynamically select discretization tightness conditioned on inputs.
We show that dynamically varying tightness in communication bottlenecks can improve model performance on visual reasoning and reinforcement learning tasks.
arXiv Detail & Related papers (2022-02-02T23:54:26Z)
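To ground the terminology: a sketch of a vector-quantization bottleneck where the number of usable codes (the discretization tightness) varies with the input. The selection rule below is a hand-written stand-in; the actual method learns it end to end.

```python
# VQ bottleneck with input-dependent tightness (toy selection rule; the
# actual method learns when to discretize coarsely vs. finely).
import numpy as np

rng = np.random.default_rng(6)
codebook = rng.standard_normal((16, 8))   # 16 codes of dimension 8

def quantize(x, n_codes):
    """Snap x to the nearest of the first n_codes codebook entries."""
    dists = np.linalg.norm(codebook[:n_codes] - x, axis=1)
    return codebook[np.argmin(dists)]

x = rng.standard_normal(8)
n_codes = 4 if np.linalg.norm(x) < 3.0 else 16   # toy input-conditioned rule
z = quantize(x, n_codes)                          # discrete message sent onward
```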
- Attribute Selection using Contranominal Scales [0.09668407688201358]
Formal Concept Analysis (FCA) allows one to analyze binary data by deriving concepts and ordering them in lattices.
The size of such a lattice depends on the number of subcontexts of the corresponding formal context that are isomorphic to a contranominal scale.
We propose the algorithm ContraFinder that enables the computation of all contranominal scales of a given formal context.
arXiv Detail & Related papers (2021-06-21T10:53:50Z)
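For reference (a standard FCA fact, not specific to this paper): the contranominal scale of dimension $n$ is the formal context $([n], [n], \neq)$, and its concept lattice is the Boolean lattice with $2^n$ concepts, which is why these subcontexts drive lattice size:

$$\mathbb{N}^c(n) = \big([n], [n], \neq\big), \qquad \big|\mathfrak{B}\big(\mathbb{N}^c(n)\big)\big| = 2^n.$$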
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
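The closed-form step admits a compact sketch (SeFa-style, run here on a random stand-in weight matrix rather than a trained GAN): the semantic directions are the top eigenvectors of $A^{\top}A$, where $A$ is the weight matrix projecting latent codes through the generator's first layer.

```python
# SeFa-style closed-form factorization on a random stand-in weight matrix
# (a trained GAN's first projection layer would be used in practice).
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((512, 128))   # first-layer weights, latent dim 128

# Directions n maximizing ||A n|| with ||n|| = 1: eigenvectors of A^T A.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # ascending eigenvalues
directions = eigvecs[:, ::-1][:, :5]         # five strongest directions

z = rng.standard_normal(128)
z_edited = z + 3.0 * directions[:, 0]        # edit along the top direction
```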
- A Theory of Usable Information Under Computational Constraints [103.5901638681034]
We propose a new framework for reasoning about information in complex systems.
Our foundation is based on a variational extension of Shannon's information theory.
We show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data.
arXiv Detail & Related papers (2020-02-25T06:09:30Z)
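For context, the central quantities are the conditional $\mathcal{V}$-entropy and the predictive $\mathcal{V}$-information (stated here in the standard form; notation may differ slightly from the paper):

$$H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}} \mathbb{E}_{x,y}\big[-\log f[x](y)\big], \qquad I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X),$$

where $\mathcal{V}$ is the class of predictive functions the observer can compute; restricting $\mathcal{V}$ is what models the computational constraints.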
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.