Variable Binding for Sparse Distributed Representations: Theory and
Applications
- URL: http://arxiv.org/abs/2009.06734v1
- Date: Mon, 14 Sep 2020 20:40:09 GMT
- Title: Variable Binding for Sparse Distributed Representations: Theory and
Applications
- Authors: E. Paxon Frady, Denis Kleyko, Friedrich T. Sommer
- Abstract summary: Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap.
VSAs encode symbols by dense pseudo-random vectors, where information is distributed throughout the entire neuron population.
We show that variable binding between dense vectors in VSAs is mathematically equivalent to tensor product binding between sparse vectors, an operation which increases dimensionality.
- Score: 4.150085009901543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Symbolic reasoning and neural networks are often considered incompatible
approaches. Connectionist models known as Vector Symbolic Architectures (VSAs)
can potentially bridge this gap. However, classical VSAs and neural networks
are still considered incompatible. VSAs encode symbols by dense pseudo-random
vectors, where information is distributed throughout the entire neuron
population. Neural networks encode features locally, often forming sparse
vectors of neural activation. Following Rachkovskij (2001); Laiho et al.
(2015), we explore symbolic reasoning with sparse distributed representations.
The core operations in VSAs are dyadic operations between vectors to express
variable binding and the representation of sets. Thus, algebraic manipulations
enable VSAs to represent and process data structures in a vector space of fixed
dimensionality. Using techniques from compressed sensing, we first show that
variable binding between dense vectors in VSAs is mathematically equivalent to
tensor product binding between sparse vectors, an operation which increases
dimensionality. This result implies that dimensionality-preserving binding for
general sparse vectors must include a reduction of the tensor matrix into a
single sparse vector. Two options for sparsity-preserving variable binding are
investigated. One binding method for general sparse vectors extends earlier
proposals to reduce the tensor product into a vector, such as circular
convolution. The other method is only defined for sparse block-codes,
block-wise circular convolution. Our experiments reveal that variable binding
for block-codes has ideal properties, whereas binding for general sparse
vectors also works, but is lossy, similar to previous proposals. We demonstrate
a VSA with sparse block-codes in example applications, cognitive reasoning and
classification, and discuss its relevance for neuroscience and neural networks.
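To make the two binding styles in the abstract concrete, below is a minimal NumPy sketch (not code from the paper; the dimensions and random codebooks are illustrative assumptions). Circular convolution binds two dense pseudo-random vectors into a vector of the same dimensionality but is only approximately invertible, while block-wise circular convolution on sparse block-codes reduces, for one-hot blocks, to modular addition of the active indices and is exactly invertible.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Dense binding: circular convolution (holographic reduced representations) ---
def circ_conv(x, y):
    """Dimensionality-preserving binding of two dense vectors."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def circ_corr(z, y):
    """Approximate unbinding via circular correlation with y."""
    return np.real(np.fft.ifft(np.fft.fft(z) * np.conj(np.fft.fft(y))))

N = 1024
x = rng.normal(0, 1 / np.sqrt(N), N)    # dense pseudo-random role vector
y = rng.normal(0, 1 / np.sqrt(N), N)    # dense pseudo-random filler vector
z = circ_conv(x, y)                     # bound pair, still N-dimensional
x_hat = circ_corr(z, y)                 # noisy reconstruction of x
print("dense binding, cosine(x, x_hat):",
      x_hat @ x / (np.linalg.norm(x_hat) * np.linalg.norm(x)))

# --- Sparse block-code binding: block-wise circular convolution ---
B, L = 32, 32                           # B blocks of length L, one active unit per block

def random_block_code(rng, B, L):
    v = np.zeros((B, L))
    v[np.arange(B), rng.integers(0, L, B)] = 1.0
    return v

def block_bind(a, b):
    """Block-wise circular convolution; for one-hot blocks this adds the
    active indices modulo L, preserving both sparsity and dimensionality."""
    out = np.zeros_like(a)
    out[np.arange(a.shape[0]),
        (a.argmax(axis=1) + b.argmax(axis=1)) % a.shape[1]] = 1.0
    return out

def block_unbind(c, b):
    """Exact inverse for one-hot blocks: subtract the active indices modulo L."""
    out = np.zeros_like(c)
    out[np.arange(c.shape[0]),
        (c.argmax(axis=1) - b.argmax(axis=1)) % c.shape[1]] = 1.0
    return out

a = random_block_code(rng, B, L)
b = random_block_code(rng, B, L)
c = block_bind(a, b)
print("block-code binding recovered exactly:",
      np.array_equal(block_unbind(c, b), a))
```

The dense case prints a similarity noticeably below 1, while the block-code case recovers the factor exactly, loosely mirroring the lossy-versus-ideal distinction drawn in the abstract.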
Related papers
- A Walsh Hadamard Derived Linear Vector Symbolic Architecture [83.27945465029167]
Vector Symbolic Architectures (VSAs) are an approach to developing neuro-symbolic AI.
HLB is designed to have favorable computational efficiency and efficacy in classic VSA tasks.
arXiv Detail & Related papers (2024-10-30T03:42:59Z)
- An Intrinsic Vector Heat Network [64.55434397799728]
This paper introduces a novel neural network architecture for learning tangent vector fields on surfaces embedded in 3D.
We introduce a trainable vector heat diffusion module to spatially propagate vector-valued feature data across the surface.
We also demonstrate the effectiveness of our method on the useful industrial application of quadrilateral mesh generation.
arXiv Detail & Related papers (2024-06-14T00:40:31Z)
- Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors [3.9271338080639753]
We propose Binder, a novel approach for order-based representation.
Binder uses binary vectors for embedding, so the embedding vectors are compact with an order of magnitude smaller footprint than other methods.
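As a rough illustration of why binary embedding vectors are so compact, the sketch below bit-packs {0,1} vectors and checks a containment-style partial order with bitwise operations. The containment convention (x is below y iff every active bit of x is also active in y) and the helper names `pack` and `is_below` are assumptions for illustration; the summary does not specify Binder's exact order relation or which direction encodes generality.

```python
import numpy as np

def pack(bits):
    """Bit-pack a 1-D array of 0/1 values into uint8 bytes (8 dims per byte)."""
    return np.packbits(bits.astype(np.uint8))

def is_below(x_packed, y_packed):
    """Containment order on packed binary vectors:
    x <= y  iff  every active bit of x is also active in y."""
    return bool(np.all(np.bitwise_and(x_packed, y_packed) == x_packed))

animal = pack(np.array([1, 1, 0, 0, 1, 0, 0, 0]))
dog    = pack(np.array([1, 1, 1, 0, 1, 0, 1, 0]))   # refines "animal" with extra bits
print(is_below(animal, dog))   # True under this (assumed) convention
print(is_below(dog, animal))   # False
```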
arXiv Detail & Related papers (2024-04-16T21:52:55Z)
- Factorizers for Distributed Sparse Block Codes [45.29870215671697]
We propose a fast and highly accurate method for factorizing distributed sparse block codes (SBCs).
Our iterative factorizer introduces a threshold-based nonlinear activation, conditional random sampling, and an $\ell_\infty$-based similarity metric.
We demonstrate the feasibility of our method on four deep CNN architectures over CIFAR-100, ImageNet-1K, and RAVEN datasets.
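The factorization problem itself can be stated in a few lines. The sketch below binds one codeword from each of two small codebooks with block-wise circular convolution and then recovers the pair by brute-force unbinding. It only illustrates what a factorizer must solve, not the paper's iterative method (the threshold activation, conditional random sampling, and $\ell_\infty$ similarity are not reproduced here), and the sizes B, L, M are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
B, L, M = 16, 16, 50          # illustrative: 16 blocks of length 16, 50 codewords per factor

def random_block_code(rng, B, L):
    v = np.zeros((B, L))
    v[np.arange(B), rng.integers(0, L, B)] = 1.0
    return v

def block_bind(a, b):         # block-wise circular convolution for one-hot blocks
    out = np.zeros_like(a)
    out[np.arange(a.shape[0]),
        (a.argmax(axis=1) + b.argmax(axis=1)) % a.shape[1]] = 1.0
    return out

def block_unbind(c, b):       # exact inverse for one-hot blocks
    out = np.zeros_like(c)
    out[np.arange(c.shape[0]),
        (c.argmax(axis=1) - b.argmax(axis=1)) % c.shape[1]] = 1.0
    return out

# Two codebooks; the observed vector is the binding of one codeword from each.
X = [random_block_code(rng, B, L) for _ in range(M)]
Y = [random_block_code(rng, B, L) for _ in range(M)]
i_true, j_true = 7, 23
s = block_bind(X[i_true], Y[j_true])

# Brute-force factorization: try every codeword of Y, unbind, and look for an
# exact match in X.  A real factorizer searches this product space far more cheaply.
found = [(i, j) for j, y in enumerate(Y) for i, x in enumerate(X)
         if np.array_equal(block_unbind(s, y), x)]
print(found)   # [(7, 23)] with overwhelming probability at these sizes
```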
arXiv Detail & Related papers (2023-03-24T12:31:48Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
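The summary does not give the exact margin formulation, so the snippet below uses a standard triplet margin loss as a representative large-margin metric-learning objective: same-class embeddings are pulled together and different-class embeddings are pushed at least a margin apart.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Large-margin objective: penalize triplets where the negative is not
    at least `margin` farther from the anchor than the positive."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.1, 0.9]); p = np.array([0.2, 1.0]); n = np.array([0.9, 0.1])
print(triplet_margin_loss(a, p, n))   # near zero: this triplet is already well separated
```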
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
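A minimal sketch of the frame-averaging idea, under the simplifying assumption that the frame is the entire (tiny) symmetry group: averaging an arbitrary backbone over the two-element reflection group makes it exactly invariant to sign flips. Real FA constructs input-dependent frames so the average stays cheap for large or continuous groups; the tiny MLP here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 2)), rng.normal(size=(1, 16))

def backbone(x):
    """An arbitrary, non-symmetric backbone (tiny MLP)."""
    return (W2 @ np.tanh(W1 @ x)).item()

def fa_invariant(x):
    """Frame averaging over the reflection group {+1, -1} yields invariance."""
    return 0.5 * (backbone(x) + backbone(-x))

x = rng.normal(size=2)
print(fa_invariant(x), fa_invariant(-x))   # identical: invariant to x -> -x
print(backbone(x), backbone(-x))           # generally different
```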
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Pseudo-Euclidean Attract-Repel Embeddings for Undirected Graphs [73.0261182389643]
Dot product embeddings take a graph and construct vectors for nodes such that dot products between two vectors give the strength of the edge.
We remove the transitivity assumption by embedding nodes into a pseudo-Euclidean space.
Pseudo-Euclidean embeddings can compress networks efficiently, allow for multiple notions of nearest neighbors each with their own interpretation, and can be 'slotted' into existing models.
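A small sketch of why an indefinite (pseudo-Euclidean) inner product removes the transitivity pressure of ordinary dot-product embeddings, assuming the usual signature form with attracting and repelling coordinates: two nodes can each score highly with a common neighbour yet score negatively with each other.

```python
import numpy as np

def pseudo_euclidean_score(u, v, p):
    """Indefinite inner product: the first p coordinates attract,
    the remaining coordinates repel."""
    return float(u[:p] @ v[:p] - u[p:] @ v[p:])

a = np.array([1.0,  0.5, 1.0])
b = np.array([1.0, -0.5, 1.0])
c = np.array([1.0,  0.0, 0.0])
p = 2
print(pseudo_euclidean_score(a, c, p))   #  1.0: a is strongly linked to c
print(pseudo_euclidean_score(b, c, p))   #  1.0: b is strongly linked to c
print(pseudo_euclidean_score(a, b, p))   # -0.25: yet a and b repel each other
```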
arXiv Detail & Related papers (2021-06-17T17:23:56Z)
- Vector Neurons: A General Framework for SO(3)-Equivariant Networks [32.81671803104126]
In this paper, we introduce a general framework built on top of what we call Vector Neuron representations.
Our vector neurons enable a simple mapping of SO(3) actions to latent spaces.
We also show for the first time a rotation equivariant reconstruction network.
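The core algebraic trick is easy to verify numerically: if a feature is a list of 3-D vectors and a linear layer only mixes channels, then rotating the input commutes with applying the layer. The sketch below checks this for a random rotation; it covers only the linear layer (the paper also defines equivariant nonlinearities and pooling), and the channel sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def vn_linear(V, W):
    """Vector-neuron style linear layer: V has shape (C_in, 3) and W mixes
    channels only, so vn_linear(V @ R, W) == vn_linear(V, W) @ R."""
    return W @ V

def random_rotation(rng):
    """Random 3x3 rotation via QR decomposition (det forced to +1)."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.linalg.det(Q))

C_in, C_out = 8, 4
V = rng.normal(size=(C_in, 3))       # 8 vector-valued "neurons"
W = rng.normal(size=(C_out, C_in))   # learned channel-mixing weights
R = random_rotation(rng)

lhs = vn_linear(V @ R, W)            # rotate input, then apply layer
rhs = vn_linear(V, W) @ R            # apply layer, then rotate output
print(np.allclose(lhs, rhs))         # True: SO(3)-equivariant by construction
```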
arXiv Detail & Related papers (2021-04-25T18:48:15Z)
- Positional Artefacts Propagate Through Masked Language Model Embeddings [16.97378491957158]
We find cases of persistent outlier neurons within BERT and RoBERTa's hidden state vectors.
We pre-train the RoBERTa-base models from scratch and find that the outliers disappear without using positional embeddings.
arXiv Detail & Related papers (2020-11-09T12:49:39Z)
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
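As a rough, SE-style approximation of the squeeze idea (the actual module's reasoning step differs), the sketch below pools a C x H x W feature map into a single channel-wise vector, runs a tiny MLP on it, and uses the result to rescale the channels, returning a tensor of the original shape so it can be dropped into an existing network. All layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def squeeze_block(feat, W1, W2):
    """Squeeze the spatial map into a channel-wise global vector, reason on
    that small vector with a tiny MLP, and reweight the original channels."""
    C, H, W = feat.shape
    squeezed = feat.reshape(C, -1).mean(axis=1)                     # channel-wise global vector
    gates = 1.0 / (1.0 + np.exp(-(W2 @ np.tanh(W1 @ squeezed))))    # per-channel gates in (0, 1)
    return feat * gates[:, None, None]

C, H, Wd = 16, 8, 8
feat = rng.normal(size=(C, H, Wd))
W1 = rng.normal(size=(4, C)); W2 = rng.normal(size=(C, 4))
out = squeeze_block(feat, W1, W2)
print(out.shape)   # (16, 8, 8): same shape in and out, so it plugs into existing networks
```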
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.