Shift-Equivariant Similarity-Preserving Hypervector Representations of
Sequences
- URL: http://arxiv.org/abs/2112.15475v1
- Date: Fri, 31 Dec 2021 14:29:12 GMT
- Title: Shift-Equivariant Similarity-Preserving Hypervector Representations of
Sequences
- Authors: Dmitri A. Rachkovskij
- Abstract summary: We propose an approach for the formation of hypervectors of sequences.
Our methods represent the sequence elements by compositional hypervectors.
We experimentally explored the proposed representations using a diverse set of tasks with data in the form of symbolic strings.
- Score: 0.8223798883838329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperdimensional Computing (HDC), also known as Vector-Symbolic Architectures
(VSA), is a promising framework for the development of cognitive architectures
and artificial intelligence systems, as well as for technical applications and
emerging neuromorphic and nanoscale hardware. HDC/VSA operate with
hypervectors, i.e., distributed vector representations of large fixed dimension
(usually > 1000). One of the key ingredients of HDC/VSA is the set of methods for
encoding data of various types (from numeric scalars and vectors to graphs)
into hypervectors. In this paper, we propose an approach for the formation of
hypervectors of sequences that provides both an equivariance with respect to
the shift of sequences and preserves the similarity of sequences with identical
elements at nearby positions. Our methods represent the sequence elements by
compositional hypervectors and exploit permutations of hypervectors for
representing the order of sequence elements. We experimentally explored the
proposed representations using a diverse set of tasks with data in the form of
symbolic strings. Although our approach is feature-free as it forms the
hypervector of a sequence from the hypervectors of its symbols at their
positions, it demonstrated performance on par with methods that apply
various features, such as subsequences. The proposed techniques were designed
for the HDC/VSA model known as Sparse Binary Distributed Representations.
However, they can be adapted to hypervectors in formats of other HDC/VSA
models, as well as for representing sequences of types other than symbolic
strings.
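As a rough illustration of the encoding idea described above (a minimal sketch, not the paper's exact construction), the following snippet encodes a symbolic string by permuting each symbol's random hypervector according to its position and bundling the results with a majority vote. The dense binary hypervectors and the cyclic roll used as the permutation are simplifying assumptions; the paper works with Sparse Binary Distributed Representations and with permutations chosen so that similarity is also preserved for identical symbols at nearby, not just identical, positions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # hypervector dimensionality (the paper uses dimensions > 1000)

# Random item memory: one dense binary hypervector per symbol.
# Illustrative choice; the paper itself uses Sparse Binary Distributed Representations.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
item_memory = {ch: rng.integers(0, 2, DIM, dtype=np.int8) for ch in ALPHABET}

def encode(seq: str, offset: int = 0) -> np.ndarray:
    """Bundle permuted symbol hypervectors.

    The symbol at position i is permuted (here: cyclically rolled) by i + offset,
    and the permuted hypervectors are combined by elementwise majority.
    """
    permuted = [np.roll(item_memory[ch], i + offset) for i, ch in enumerate(seq)]
    sums = np.sum(permuted, axis=0)
    return (2 * sums > len(seq)).astype(np.int8)

def sim(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized Hamming similarity (1.0 = identical, ~0.5 = unrelated)."""
    return float(np.mean(a == b))

# Shift equivariance: shifting the sequence by one position corresponds to
# applying the permutation (a cyclic roll) once to the sequence hypervector.
assert np.array_equal(encode("abcde", offset=1), np.roll(encode("abcde"), 1))

# Similarity preservation for shared symbols at identical positions.
print(sim(encode("abcde"), encode("abcdx")))  # well above 0.5
print(sim(encode("abcde"), encode("vwxyz")))  # close to 0.5

# Note: with a plain roll-by-position permutation, the same symbol at *nearby*
# (but different) positions no longer looks similar; preserving that
# nearby-position similarity as well is what the paper's methods add.
```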
Related papers
- Equivariant Graph Network Approximations of High-Degree Polynomials for Force Field Prediction [62.05532524197309]
Equivariant deep models have shown promise in accurately predicting atomic potentials and force fields in molecular dynamics simulations.
In this work, we analyze equivariant functions for equivariant architectures and introduce a novel equivariant network named PACE.
In experiments on commonly used benchmarks, PACE demonstrates state-of-the-art performance in predicting atomic energies and force fields.
arXiv Detail & Related papers (2024-11-06T19:34:40Z)
- Sobol Sequence Optimization for Hardware-Efficient Vector Symbolic Architectures [2.022279594514036]
Hyperdimensional computing (HDC) is an emerging computing paradigm with significant promise for efficient and robust learning.
Objects are encoded with high-dimensional vector symbolic sequences called hypervectors.
The quality of hypervectors, defined by their distribution and independence, directly impacts the performance of HDC systems.
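For a sense of what "quality, defined by distribution and independence" can mean in practice, here is a small sketch (not the paper's optimization pipeline) that draws hypervectors from a scrambled Sobol low-discrepancy sequence via scipy.stats.qmc and compares the spread of pairwise similarities against pseudo-random hypervectors; the dimensionality and item counts are arbitrary choices.

```python
import numpy as np
from scipy.stats import qmc

DIM = 4096      # hypervector dimensionality (illustrative)
N_ITEMS = 32    # number of hypervectors; a power of 2 suits Sobol sampling

# Quasi-random hypervectors: threshold scrambled Sobol points at 0.5.
sobol = qmc.Sobol(d=DIM, scramble=True, seed=0)
sobol_hvs = (sobol.random_base2(m=5) > 0.5).astype(np.int8)  # 2**5 = 32 vectors

# Baseline: pseudo-random binary hypervectors.
rng = np.random.default_rng(0)
rand_hvs = rng.integers(0, 2, (N_ITEMS, DIM), dtype=np.int8)

def pairwise_sim(hvs: np.ndarray) -> np.ndarray:
    """Normalized Hamming similarities over all distinct pairs."""
    sims = []
    for i in range(len(hvs)):
        for j in range(i + 1, len(hvs)):
            sims.append(np.mean(hvs[i] == hvs[j]))
    return np.array(sims)

# Near-orthogonality: distinct hypervectors should sit close to 0.5 similarity;
# the spread around 0.5 is one proxy for the "quality" mentioned above.
for name, hvs in [("sobol", sobol_hvs), ("pseudo-random", rand_hvs)]:
    s = pairwise_sim(hvs)
    print(f"{name:>14}: mean={s.mean():.4f}  std={s.std():.4f}")
```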
arXiv Detail & Related papers (2023-11-17T01:48:07Z)
- uHD: Unary Processing for Lightweight and Dynamic Hyperdimensional Computing [1.7118124088316602]
Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors.
In this paper, we show how to generate intensity and position hypervectors in HDC using low-discrepancy sequences.
For the first time in the literature, our proposed approach employs lightweight vector generators utilizing unary bit-streams for efficient encoding of data.
arXiv Detail & Related papers (2023-11-16T06:28:19Z)
- Learning from Hypervectors: A Survey on Hypervector Encoding [9.46717806608802]
Hyperdimensional computing (HDC) is an emerging computing paradigm that imitates the brain's structure to offer a powerful and efficient processing and learning model.
In HDC, the data are encoded with long vectors, called hypervectors, typically with a length of 1K to 10K.
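As a generic example of the kind of encoding such a survey covers (not taken from the survey itself), the sketch below packs a small record into a single hypervector by binding role and filler hypervectors with XOR and bundling the bound pairs by majority vote; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 10_000  # within the 1K to 10K hypervector lengths mentioned above

def random_hv() -> np.ndarray:
    return rng.integers(0, 2, DIM, dtype=np.int8)

# Hypothetical roles and fillers for a toy record
# {colour: red, shape: round, size: big}.
roles   = {k: random_hv() for k in ("colour", "shape", "size")}
fillers = {k: random_hv() for k in ("red", "round", "big", "blue")}

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """XOR binding for binary hypervectors (its own inverse)."""
    return np.bitwise_xor(a, b)

def bundle(hvs) -> np.ndarray:
    """Elementwise majority vote over an odd number of hypervectors."""
    return (np.sum(hvs, axis=0) > len(hvs) // 2).astype(np.int8)

record = bundle([bind(roles["colour"], fillers["red"]),
                 bind(roles["shape"],  fillers["round"]),
                 bind(roles["size"],   fillers["big"])])

# Unbinding the colour role yields a noisy version of "red".
query = bind(record, roles["colour"])
for name, f in fillers.items():
    print(name, round(float(np.mean(query == f)), 3))  # "red" ~0.75, others ~0.5
```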
arXiv Detail & Related papers (2023-08-01T17:42:35Z)
- Discrete Graph Auto-Encoder [52.50288418639075]
We introduce a new framework named Discrete Graph Auto-Encoder (DGAE).
We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations.
In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model.
arXiv Detail & Related papers (2023-06-13T12:40:39Z)
- Deep Diversity-Enhanced Feature Representation of Hyperspectral Images [87.47202258194719]
We rectify 3D convolution by modifying its topology to enhance the rank upper-bound.
We also propose a novel diversity-aware regularization (DA-Reg) term that acts on the feature maps to maximize independence among elements.
To demonstrate the superiority of the proposed Re³-ConvSet and DA-Reg, we apply them to various HS image processing and analysis tasks.
arXiv Detail & Related papers (2023-01-15T16:19:18Z)
- Understanding Hyperdimensional Computing for Parallel Single-Pass Learning [47.82940409267635]
We propose a new class of VSAs, finite group VSAs, which surpass the limits of HDC.
Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6% while maintaining hardware efficiency.
arXiv Detail & Related papers (2022-02-10T02:38:56Z)
- Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences [4.65149292714414]
A critical step in designing HDC/VSA solutions is to obtain hypervector representations from the input data.
Here, we propose transforming sequences into distributed representations that both preserve the similarity of identical sequence elements at nearby positions and are equivariant to sequence shift.
The proposed transformation was experimentally investigated with symbolic strings used for modeling human perception of word similarity.
arXiv Detail & Related papers (2022-01-27T17:41:28Z)
- Computing on Functions Using Randomized Vector Representations [4.066849397181077]
We call this new function encoding and computing framework Vector Function Architecture (VFA).
Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems.
arXiv Detail & Related papers (2021-09-08T04:39:48Z)
- Tensor Representations for Action Recognition [54.710267354274194]
Human actions in sequences are characterized by the complex interplay between spatial features and their temporal dynamics.
We propose novel tensor representations for capturing higher-order relationships between visual features for the task of action recognition.
We use higher-order tensors and so-called Eigenvalue Power Normalization (EPN), which has long been speculated to perform spectral detection of higher-order occurrences.
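Since the summary leans on Eigenvalue Power Normalization, here is a minimal second-order sketch of that operation (the paper applies it to higher-order tensors; the feature sizes below are made up): eigendecompose a symmetric descriptor and raise its eigenvalues to a power 0 < gamma < 1, damping dominant spectral components relative to weaker ones.

```python
import numpy as np

def epn(matrix: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Eigenvalue Power Normalization for a symmetric PSD matrix:
    raise each eigenvalue to the power gamma (0 < gamma < 1)."""
    eigvals, eigvecs = np.linalg.eigh(matrix)
    eigvals = np.clip(eigvals, 0.0, None) ** gamma  # guard tiny negatives from round-off
    return eigvecs @ np.diag(eigvals) @ eigvecs.T

# Toy usage: second-order aggregation of per-frame features, then EPN.
rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 8))             # 30 frames, 8-dim features (illustrative sizes)
second_order = frames.T @ frames / len(frames)
descriptor = epn(second_order, gamma=0.5)
print(descriptor.shape)                        # (8, 8) normalized descriptor
```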
arXiv Detail & Related papers (2020-12-28T17:27:18Z)
- High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance Model [101.74172837046382]
We propose a novel quadratic classification technique, the parameters of which are chosen such that the Fisher discriminant ratio is maximized.
Numerical simulations show that the proposed classifier not only outperforms the classical R-QDA for both synthetic and real data but also requires lower computational complexity.
arXiv Detail & Related papers (2020-06-25T12:00:26Z)