Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences
- URL: http://arxiv.org/abs/2201.11691v1
- Date: Thu, 27 Jan 2022 17:41:28 GMT
- Title: Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences
- Authors: Dmitri A. Rachkovskij, Denis Kleyko
- Abstract summary: A critical step for designing the HDC/VSA solutions is to obtain such representations from the input data.
Here, we propose their transformation to distributed representations that both preserve the similarity of identical sequence elements at nearby positions and are equivariant to the sequence shift.
The proposed transformation was experimentally investigated with symbolic strings used for modeling human perception of word similarity.
- Score: 4.65149292714414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperdimensional computing (HDC), also known as vector symbolic architectures
(VSA), is a computing framework used within artificial intelligence and
cognitive computing that operates with distributed vector representations of
large fixed dimensionality. A critical step for designing the HDC/VSA solutions
is to obtain such representations from the input data. Here, we focus on
sequences and propose their transformation to distributed representations that
both preserve the similarity of identical sequence elements at nearby positions
and are equivariant to the sequence shift. These properties are enabled by
forming representations of sequence positions using recursive binding and
superposition operations. The proposed transformation was experimentally
investigated with symbolic strings used for modeling human perception of word
similarity. The obtained results are on a par with more sophisticated
approaches from the literature. The proposed transformation was designed for
the HDC/VSA model known as Fourier Holographic Reduced Representations.
However, it can be adapted to some other HDC/VSA models.
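The abstract describes forming position hypervectors by recursive binding and superposition in the Fourier Holographic Reduced Representations (FHRR) model. The snippet below is a minimal NumPy sketch of that idea: FHRR hypervectors are random unit phasors, binding is element-wise complex multiplication, and superposition is addition. The specific recursion (mixing the previous position hypervector with its binding to a fixed "shift" hypervector, weighted by `alpha`), the `encode_sequence` helper, and all parameter names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def random_phasor(dim, rng):
    """Random FHRR hypervector: unit-magnitude complex phasors."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, dim))

def bind(a, b):
    """FHRR binding: element-wise complex multiplication (phase addition)."""
    return a * b

def similarity(a, b):
    """Normalized real-valued similarity between two FHRR hypervectors."""
    return float(np.real(np.vdot(a, b)) / len(a))

def position_hypervectors(n_positions, dim, alpha=0.5, seed=0):
    """Hypothetical recursive position encoding: each position hypervector is a
    superposition of the previous one and its binding with a fixed 'shift'
    hypervector, projected back onto unit phasors."""
    rng = np.random.default_rng(seed)
    shift = random_phasor(dim, rng)
    positions = [random_phasor(dim, rng)]
    for _ in range(1, n_positions):
        mixed = alpha * positions[-1] + (1.0 - alpha) * bind(positions[-1], shift)
        positions.append(mixed / np.abs(mixed))  # renormalize to unit phasors
    return positions

def encode_sequence(seq, item_memory, positions):
    """Superpose each element's hypervector bound to its position hypervector."""
    return sum(bind(item_memory[s], positions[i]) for i, s in enumerate(seq))

# Nearby positions are similar; far-apart positions are nearly orthogonal.
pos = position_hypervectors(n_positions=8, dim=2048)
print(round(similarity(pos[0], pos[1]), 2), round(similarity(pos[0], pos[6]), 2))
```

In this sketch, shifting the whole sequence corresponds to binding its hypervector with one fixed phasor vector, which is the kind of shift-equivariance the abstract refers to, while identical elements at nearby positions keep similar contributions to the sequence hypervector.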
Related papers
- EulerFormer: Sequential User Behavior Modeling with Complex Vector Attention [88.45459681677369]
We propose a novel transformer variant with complex vector attention, named EulerFormer.
It provides a unified theoretical framework to formulate both semantic difference and positional difference.
It is more robust to semantic variations and possesses superior theoretical properties in principle.
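The EulerFormer entry describes attention over complex-valued token representations in which semantic and positional differences are handled in one phase-based framework. The sketch below is a heavily simplified illustration of that general idea, not EulerFormer's actual formulation: positions rotate the phases of complex queries and keys, so the attention logit depends jointly on the semantic content and the positional offset. The function name and the per-dimension frequency vector `theta` are assumptions.

```python
import numpy as np

def complex_attention_logit(q, k, pos_q, pos_k, theta):
    """Attention logit between a complex query and key whose phases are rotated
    by their positions; the score reflects both the semantic (vector) difference
    and the positional difference (illustrative only)."""
    q_rot = q * np.exp(1j * pos_q * theta)   # rotate query phases by its position
    k_rot = k * np.exp(1j * pos_k * theta)   # rotate key phases by its position
    return float(np.real(np.vdot(q_rot, k_rot)))  # conjugate inner product

# Example: the logit depends on (pos_k - pos_q) through the phase rotation.
rng = np.random.default_rng(0)
d = 8
q = rng.standard_normal(d) + 1j * rng.standard_normal(d)
k = rng.standard_normal(d) + 1j * rng.standard_normal(d)
theta = 1.0 / (10000 ** (np.arange(d) / d))  # RoPE-style frequency schedule
print(complex_attention_logit(q, k, pos_q=3, pos_k=7, theta=theta))
```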
arXiv Detail & Related papers (2024-03-26T14:18:43Z)
- Neuromorphic Visual Scene Understanding with Resonator Networks [11.701553530610973]
We propose a neuromorphic solution exploiting three key concepts.
The framework is based on Vector Symbolic Architectures with complex-valued vectors.
The network factorizes the non-commutative transforms, translation and rotation, in visual scenes.
A companion paper demonstrates the same approach in real-world application scenarios for machine vision and robotics.
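The resonator-network entry concerns factorizing a bound (product) hypervector back into its factors, each drawn from a known codebook. Below is a minimal NumPy sketch of a complex-valued resonator loop under FHRR-style binding; the initialization, cleanup rule, and iteration count are simplifying assumptions rather than the paper's neuromorphic formulation.

```python
import numpy as np

def phasor_normalize(v):
    """Project a complex vector back onto unit-magnitude phasors."""
    return v / np.maximum(np.abs(v), 1e-12)

def resonator_factorize(s, codebooks, iterations=50):
    """Recover the factors of s = x1 * x2 * ... (element-wise FHRR binding),
    each factor taken from a known codebook (sketch of a resonator network)."""
    # Start every estimate as the superposition of its whole codebook.
    estimates = [phasor_normalize(cb.sum(axis=0)) for cb in codebooks]
    for _ in range(iterations):
        for i, cb in enumerate(codebooks):
            # Unbind the other current estimates (conjugate is the inverse).
            others = np.prod([np.conj(e) for j, e in enumerate(estimates) if j != i], axis=0)
            guess = s * others
            # Clean up by projecting onto the codebook and back.
            weights = cb.conj() @ guess            # similarity to each codeword
            estimates[i] = phasor_normalize(cb.T @ weights)
    return estimates

# Example: bind one codeword from each of two codebooks, then factorize.
rng = np.random.default_rng(0)
dim, n_codes = 512, 20
books = [np.exp(1j * rng.uniform(-np.pi, np.pi, (n_codes, dim))) for _ in range(2)]
target = books[0][3] * books[1][7]
recovered = resonator_factorize(target, books)
print([int(np.argmax(np.real(cb.conj() @ est))) for cb, est in zip(books, recovered)])  # expected: [3, 7]
```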
arXiv Detail & Related papers (2022-08-26T22:17:52Z)
- OneDConv: Generalized Convolution For Transform-Invariant Representation [76.15687106423859]
We propose a novel generalized one-dimensional convolutional operator (OneDConv).
It dynamically transforms the convolution kernels based on the input features in a computationally and parametrically efficient manner.
It improves the robustness and generalization of convolution without sacrificing the performance on common images.
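The OneDConv entry describes transforming convolution kernels on the fly from the input features. The snippet below illustrates the generic idea of input-conditioned kernel modulation; it is not OneDConv's actual operator, and the global pooling, generator matrix `w_gen`, and multiplicative modulation are assumptions for illustration.

```python
import numpy as np

def dynamic_conv1d(x, base_kernel, w_gen):
    """Input-conditioned 1-D convolution: globally pooled features generate a
    per-tap modulation of a base kernel before the convolution is applied."""
    # x: (length, channels); base_kernel: (taps,); w_gen: (channels, taps)
    context = x.mean(axis=0)                      # global average pooling
    modulation = 1.0 + np.tanh(context @ w_gen)   # data-dependent kernel scaling
    kernel = base_kernel * modulation             # transformed kernel for this input
    return np.stack(
        [np.convolve(x[:, c], kernel, mode="same") for c in range(x.shape[1])],
        axis=1,
    )

# Example usage with random data and a 3-tap base kernel.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 4))
out = dynamic_conv1d(x, np.array([0.25, 0.5, 0.25]), rng.standard_normal((4, 3)) * 0.1)
print(out.shape)  # (32, 4)
```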
arXiv Detail & Related papers (2022-01-15T07:44:44Z)
- Shift-Equivariant Similarity-Preserving Hypervector Representations of Sequences [0.8223798883838329]
We propose an approach for the formation of hypervectors of sequences.
Our methods represent the sequence elements by compositional hypervectors.
We experimentally explored the proposed representations using a diverse set of tasks with data in the form of symbolic strings.
arXiv Detail & Related papers (2021-12-31T14:29:12Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model [58.17021225930069]
We explain the rationality of Vision Transformer by analogy with the proven practical Evolutionary Algorithm (EA).
We propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly.
Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works.
arXiv Detail & Related papers (2021-05-31T16:20:03Z)
- Convolutional Hough Matching Networks [39.524998833064956]
We introduce a Hough transform perspective on convolutional matching and propose an effective geometric matching algorithm, dubbed Convolutional Hough Matching (CHM).
We cast it into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns non-rigid matching with a small number of interpretable parameters.
Our method sets a new state of the art on standard benchmarks for semantic visual correspondence, proving its strong robustness to challenging intra-class variations.
arXiv Detail & Related papers (2021-03-31T06:17:03Z)
- Invariant Deep Compressible Covariance Pooling for Aerial Scene Categorization [80.55951673479237]
We propose a novel invariant deep compressible covariance pooling (IDCCP) to solve nuisance variations in aerial scene categorization.
We conduct extensive experiments on the publicly released aerial scene image data sets and demonstrate the superiority of this method compared with state-of-the-art methods.
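The IDCCP entry builds on second-order (covariance) pooling of convolutional features. The sketch below shows plain covariance pooling with a log-Euclidean mapping, the standard ingredient such methods start from; the invariance and compression components specific to IDCCP are not reproduced here, and the function name and epsilon are assumptions.

```python
import numpy as np

def covariance_pooling(feature_map, eps=1e-5):
    """Covariance (second-order) pooling of a spatial feature map into a
    channel-by-channel descriptor, followed by a log-Euclidean mapping."""
    # feature_map: (height, width, channels) -> descriptor: (channels, channels)
    x = feature_map.reshape(-1, feature_map.shape[-1])
    x = x - x.mean(axis=0, keepdims=True)
    cov = x.T @ x / max(x.shape[0] - 1, 1) + eps * np.eye(x.shape[1])
    eigval, eigvec = np.linalg.eigh(cov)          # symmetric, so eigh applies
    return eigvec @ np.diag(np.log(eigval)) @ eigvec.T

print(covariance_pooling(np.random.default_rng(0).standard_normal((7, 7, 16))).shape)  # (16, 16)
```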
arXiv Detail & Related papers (2020-11-11T11:13:07Z)
- tvGP-VAE: Tensor-variate Gaussian Process Prior Variational Autoencoder [0.0]
tvGP-VAE is able to explicitly model correlation via the use of kernel functions.
We show that the choice of which correlation structures to explicitly represent in the latent space has a significant impact on model performance.
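The tvGP-VAE entry models latent correlation explicitly through kernel functions. As a minimal illustration of that idea (not the paper's architecture), the snippet builds one squared-exponential kernel per tensor mode and combines them with a Kronecker product, so each mode's correlation structure is specified explicitly; the grid sizes and lengthscales are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(points, lengthscale):
    """Squared-exponential kernel matrix over a set of 1-D index points."""
    diff = points[:, None] - points[None, :]
    return np.exp(-0.5 * (diff / lengthscale) ** 2)

# One kernel per latent-tensor mode (e.g. time and space), combined with a
# Kronecker product into a full correlation matrix over the latent grid.
K_time = rbf_kernel(np.arange(10.0), lengthscale=2.0)
K_space = rbf_kernel(np.arange(5.0), lengthscale=1.0)
K_full = np.kron(K_time, K_space)
print(K_full.shape)  # (50, 50)
```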
arXiv Detail & Related papers (2020-06-08T17:59:13Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
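The last entry represents model parameters implicitly as a tensor and works with it in factored form. The sketch below shows the basic canonical polyadic (CP) trick such approaches rely on: with the weight tensor kept as CP factors, a multilinear score over per-mode feature vectors can be evaluated without materializing the full tensor. The rank, the feature vectors, and the function name are assumptions for illustration.

```python
import numpy as np

def cp_score(factors, mode_features):
    """Multilinear score <W, phi_1 x phi_2 x ...> with W kept in CP form
    W = sum_r a_r^(1) o a_r^(2) o ..., so the full tensor is never built."""
    # factors: list of (rank, dim_m) arrays; mode_features: list of (dim_m,) vectors
    per_mode = [A @ phi for A, phi in zip(factors, mode_features)]  # each (rank,)
    return float(np.prod(per_mode, axis=0).sum())

# Example: a rank-4 CP-factored weight tensor over three feature modes.
rng = np.random.default_rng(0)
rank, dims = 4, (6, 5, 7)
factors = [rng.standard_normal((rank, d)) for d in dims]
features = [rng.standard_normal(d) for d in dims]
print(cp_score(factors, features))
```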
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.