Orthogonal Matrices for MBAT Vector Symbolic Architectures, and a "Soft"
VSA Representation for JSON
- URL: http://arxiv.org/abs/2202.04771v1
- Date: Tue, 8 Feb 2022 18:41:32 GMT
- Title: Orthogonal Matrices for MBAT Vector Symbolic Architectures, and a "Soft"
VSA Representation for JSON
- Authors: Stephen I. Gallant
- Abstract summary: Vector Symbolic Architectures (VSAs) give a way to represent a complex object as a single fixed-length vector, so that similar objects have similar vector representations.
We review a previously proposed VSA method, MBAT (Matrix Binding of Additive Terms), which uses multiplication by random matrices for binding related terms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vector Symbolic Architectures (VSAs) give a way to represent a complex object
as a single fixed-length vector, so that similar objects have similar vector
representations. These vector representations then become easy to use for
machine learning or nearest-neighbor search. We review a previously proposed
VSA method, MBAT (Matrix Binding of Additive Terms), which uses multiplication
by random matrices for binding related terms. However, multiplying by such
matrices introduces instabilities which can harm performance. Making the random
matrices be orthogonal matrices provably fixes this problem. With respect to
larger scale applications, we see how to apply MBAT vector representations for
any data expressed in JSON. JSON is used in numerous programming languages to
express complex data, but its native format appears highly unsuited for machine
learning. Expressing JSON as a fixed-length vector makes it readily usable for
machine learning and nearest-neighbor search. Creating such JSON vectors also
shows that a VSA needs to employ binding operations that are non-commutative.
VSAs are now ready to try with full-scale practical applications, including
healthcare, pharmaceuticals, and genomics.
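To make the abstract's two main ideas concrete, here is a minimal NumPy sketch: binding as multiplication by a random orthogonal matrix (sampled via QR decomposition), which keeps repeated binding norm-stable, and a toy encoder that turns nested JSON into a fixed-length vector by binding each key's matrix to the encoding of its value and summing. The dimensionality, helper names, and the exact JSON scheme are illustrative assumptions, not the paper's reference implementation.
```python
# A minimal sketch of MBAT-style binding with orthogonal matrices, plus a toy
# JSON encoder. Names, sizes, and the JSON scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # vector dimensionality (an assumption; VSAs use high-dimensional vectors)

def random_orthogonal(dim: int) -> np.ndarray:
    """Sample a random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q * np.sign(np.diag(r))  # sign fix makes the sample Haar-uniform

def random_vector(dim: int) -> np.ndarray:
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Orthogonal binding is stable under repetition; a plain random matrix is not.
M = random_orthogonal(d)                      # ||M @ x|| == ||x|| for any x
A = rng.standard_normal((d, d)) / np.sqrt(d)  # plain random binding, for contrast
xm = xa = random_vector(d)
for _ in range(10):                           # bind ten times in a row
    xm, xa = M @ xm, A @ xa
print(np.linalg.norm(xm))  # 1.0 up to float error: no instability
print(np.linalg.norm(xa))  # typically drifts from 1: the instability the paper fixes

# Toy JSON encoding: bind each key's matrix to its value's encoding, then sum.
atom_vecs, key_mats = {}, {}

def mat_for(key) -> np.ndarray:
    if key not in key_mats:
        key_mats[key] = random_orthogonal(d)
    return key_mats[key]

def encode(obj) -> np.ndarray:
    if isinstance(obj, dict):
        # Matrix binding is non-commutative, so {"a": {"b": 1}} and
        # {"b": {"a": 1}} receive nearly orthogonal encodings.
        return sum(mat_for(k) @ encode(v) for k, v in obj.items())
    if obj not in atom_vecs:
        atom_vecs[obj] = random_vector(d)
    return atom_vecs[obj]

v1, v2 = encode({"a": {"b": 1}}), encode({"b": {"a": 1}})
print(float(v1 @ v2))  # near 0: nesting order matters, as the abstract requires
```
Because similar JSON objects share bound terms, their encodings have high inner product, which is what makes such vectors usable for machine learning and nearest-neighbor search.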
Keywords: MBAT (Matrix Binding of Additive Terms), VSA (Vector Symbolic
Architecture), HDC (Hyperdimensional Computing), Distributed Representations,
Binding, Orthogonal Matrices, Recurrent Connections, Machine Learning, Search,
JSON, VSA Applications
Related papers
- Linearithmic Clean-up for Vector-Symbolic Key-Value Memory with Kronecker Rotation Products [4.502446902578007]
A computational bottleneck in current Vector-Symbolic Architectures is the "clean-up" step.
We present a new codebook representation that supports efficient clean-up.
The resulting clean-up time complexity is linearithmic, i.e. $\mathcal{O}(N \log N)$. (A brute-force baseline for this step is sketched after this list.)
arXiv Detail & Related papers (2025-06-18T18:23:28Z) - A Walsh Hadamard Derived Linear Vector Symbolic Architecture [83.27945465029167]
Vector Symbolic Architectures (VSAs) are an approach to developing Neuro-symbolic AI.
The proposed HLB (Hadamard-derived linear Binding) is designed for favorable computational efficiency and efficacy in classic VSA tasks.
arXiv Detail & Related papers (2024-10-30T03:42:59Z) - Support matrix machine: A review [0.0]
Support matrix machine (SMM) represents one of the emerging methodologies tailored for handling matrix input data.
This article provides the first in-depth analysis of the development of the SMM model.
We discuss numerous SMM variants, such as robust, sparse, class imbalance, and multi-class classification models.
arXiv Detail & Related papers (2023-10-30T16:46:23Z) - Multiresolution kernel matrix algebra [0.0]
We show that the compression of kernel matrices by means of samplets produces optimally sparse matrices in a certain S-format.
The inverse of a kernel matrix (if it exists) is compressible in the S-format as well.
The matrix algebra is justified mathematically by pseudodifferential calculus.
arXiv Detail & Related papers (2022-11-21T17:50:22Z) - HyperSeed: Unsupervised Learning with Vector Symbolic Architectures [5.258404928739212]
This paper presents a novel unsupervised machine learning approach named Hyperseed.
It leverages Vector Symbolic Architectures (VSA) for fast learning of a topology-preserving feature map of unlabelled data.
The two distinctive novelties of the Hyperseed algorithm are 1) learning from only a few input data samples and 2) a learning rule based on a single vector operation.
arXiv Detail & Related papers (2021-10-15T20:05:43Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-roots" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Learning Sparse Graph Laplacian with K Eigenvector Prior via Iterative
GLASSO and Projection [58.5350491065936]
We consider a structural assumption on the graph Laplacian matrix $L$.
The first $K$ eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria.
We design an efficient hybrid graphical lasso/projection algorithm to compute the most suitable graph Laplacian matrix $L^* \in \mathcal{H}_u^+$ given $\bar{C}$.
arXiv Detail & Related papers (2020-10-25T18:12:50Z) - What if Neural Networks had SVDs? [66.91160214071088]
Various Neural Networks employ time-consuming matrix operations like matrix inversion.
We present an algorithm that is fast enough to speed up several matrix operations.
arXiv Detail & Related papers (2020-09-29T12:58:52Z) - Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and
Graph Problems [58.83118651518438]
We consider the general problem of learning about a matrix through vector-matrix-vector queries.
These queries provide the value of $\boldsymbol{u}^{\mathrm{T}} \boldsymbol{M} \boldsymbol{v}$ over a fixed field.
We provide new upper and lower bounds for a wide variety of problems, spanning linear algebra, statistics, and graphs. (A toy illustration of the query model appears after this list.)
arXiv Detail & Related papers (2020-06-24T19:33:49Z) - Sketching Transformed Matrices with Applications to Natural Language
Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z)
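The "clean-up" step from the linearithmic clean-up paper above maps a noisy superposition back to the most similar stored codebook vector. Below is a minimal brute-force sketch of that step with illustrative sizes and data; this is the O(N*d)-per-query baseline, not the paper's linearithmic method.
```python
# Brute-force VSA clean-up: return the codebook entry most similar to a noisy
# query. This is the O(N*d)-per-query baseline that a linearithmic codebook
# would improve on; sizes and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, d = 10_000, 1024                                  # codebook size, dimensionality
codebook = rng.standard_normal((N, d))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def clean_up(query: np.ndarray) -> int:
    """Index of the codebook vector with the largest inner product (cosine)."""
    return int(np.argmax(codebook @ query))

noisy = codebook[42] + 0.1 * rng.standard_normal(d)  # stored item plus noise
print(clean_up(noisy))                               # 42, with high probability
```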
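The vector-matrix-vector query model above is also easy to state concretely: the algorithm's only access to a hidden matrix M is through the scalar $\boldsymbol{u}^{\mathrm{T}} \boldsymbol{M} \boldsymbol{v}$. A toy sketch of the access model with illustrative names and a naive recovery baseline; this is not the paper's algorithms.
```python
# Toy illustration of the vector-matrix-vector query model: the only access
# to the hidden matrix M is via u^T M v. Not the paper's algorithms.
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))          # hidden matrix (illustrative)

def vmv_query(u: np.ndarray, v: np.ndarray) -> float:
    """Return u^T M v, the only operation the query model allows."""
    return float(u @ M @ v)

# Naive full recovery: e_i^T M e_j reads entry M[i, j], costing n^2 queries.
# The paper's upper and lower bounds concern beating this for specific problems.
e = np.eye(n)
recovered = np.array([[vmv_query(e[i], e[j]) for j in range(n)] for i in range(n)])
assert np.allclose(recovered, M)

# A cheaper probabilistic test: u^T M v == v^T M u for all u, v iff M is
# symmetric; random u, v detect asymmetry with high probability in 2 queries.
u, v = rng.standard_normal(n), rng.standard_normal(n)
print(np.isclose(vmv_query(u, v), vmv_query(v, u)))  # False here: M is not symmetric
```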
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.