Tensor Products and Hyperdimensional Computing
- URL: http://arxiv.org/abs/2305.10572v2
- Date: Sat, 20 May 2023 23:01:30 GMT
- Title: Tensor Products and Hyperdimensional Computing
- Authors: Frank Qiu
- Abstract summary: We generalize and expand some results to the general setting of vector symbolic architectures (VSA) and hyperdimensional computing (HDC).
We establish the tensor product representation as the central representation, with a suite of unique properties.
These include it being the most general and expressive representation, as well as being the most compressed representation that has errorless unbinding and detection.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Following up on a previous analysis of graph embeddings, we generalize and
expand some results to the general setting of vector symbolic architectures
(VSA) and hyperdimensional computing (HDC). Importantly, we explore the
mathematical relationship between superposition, orthogonality, and tensor
product. We establish the tensor product representation as the central
representation, with a suite of unique properties. These include it being the
most general and expressive representation, as well as being the most
compressed representation that has errorless unbinding and detection.
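As an illustration of the abstract's central claim (a hedged sketch, not code from the paper): binding two hypervectors with the tensor product produces their outer product, and contracting with the exact key recovers the bound value without error when codes are unit-norm.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256

def random_code(d, rng):
    # Random Gaussian code, normalized to unit length.
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

key, value = random_code(d, rng), random_code(d, rng)

# Tensor product binding: the bound pair is the outer product (a d x d matrix).
bound = np.outer(key, value)

# Unbinding: contracting with the exact key gives (key . key) * value = value,
# since ||key|| = 1 -- this is the "errorless unbinding" property.
recovered = key @ bound
print(np.allclose(recovered, value))  # True
```

The cost of this exactness is the quadratic size of the representation, which is the compression trade-off the abstract refers to.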
Related papers
- Point or Line? Using Line-based Representation for Panoptic Symbol Spotting in CAD Drawings [45.116136045440584]
We study the task of panoptic symbol spotting in computer-aided design (CAD) drawings composed of vector graphical primitives.
Existing methods typically rely on image rasterization, graph construction, or point-based representation.
We propose VecFormer, a novel method that addresses these challenges through a line-based representation of primitives.
arXiv Detail & Related papers (2025-05-29T12:33:11Z) - The Origins of Representation Manifolds in Large Language Models [52.68554895844062]
We show that cosine similarity in representation space may encode the intrinsic geometry of a feature through shortest, on-manifold paths.
The critical assumptions and predictions of the theory are validated on text embeddings and token activations of large language models.
arXiv Detail & Related papers (2025-05-23T13:31:22Z) - Symbolic Disentangled Representations for Images [83.88591755871734]
We propose ArSyD (Architecture for Disentanglement), which represents each generative factor as a vector of the same dimension as the resulting representation.
We study ArSyD on the dSprites and CLEVR datasets and provide a comprehensive analysis of the learned symbolic disentangled representations.
arXiv Detail & Related papers (2024-12-25T09:20:13Z) - Graph-Dictionary Signal Model for Sparse Representations of Multivariate Data [49.77103348208835]
We define a novel Graph-Dictionary signal model, where a finite set of graphs characterizes relationships in data distribution through a weighted sum of their Laplacians.
We propose a framework to infer the graph dictionary representation from observed data, along with a bilinear generalization of the primal-dual splitting algorithm to solve the learning problem.
We exploit graph-dictionary representations in a motor imagery decoding task on brain activity data, where we classify imagined motion better than standard methods.
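The graph-dictionary model above can be made concrete with a small sketch (hypothetical toy graphs, not from the paper): a finite set of graphs acts as dictionary atoms, and the effective structure is a weighted sum of their Laplacians.

```python
import numpy as np

def laplacian(adj):
    # Combinatorial graph Laplacian: degree matrix minus adjacency matrix.
    return np.diag(adj.sum(axis=1)) - adj

# Two toy dictionary atoms on 4 nodes.
A1 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)  # path
A2 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)  # cycle
weights = [0.7, 0.3]

# The model's effective operator is the weighted sum of atom Laplacians.
L = weights[0] * laplacian(A1) + weights[1] * laplacian(A2)

# A weighted sum of Laplacians is itself a valid Laplacian:
print(np.allclose(L, L.T), np.allclose(L.sum(axis=1), 0))  # True True
```

Inferring the weights (and the atoms) from observed signals is the learning problem the paper's bilinear primal-dual splitting algorithm addresses.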
arXiv Detail & Related papers (2024-11-08T17:40:43Z) - On the Geometry and Optimization of Polynomial Convolutional Networks [2.9816332334719773]
We study convolutional neural networks with monomial activation functions.
We compute the dimension and the degree of the neuromanifold, which measure the expressivity of the model.
For a generic large dataset, we derive an explicit formula that quantifies the number of critical points arising in the optimization of a regression loss.
arXiv Detail & Related papers (2024-10-01T14:13:05Z) - SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - Discovering Abstract Symbolic Relations by Learning Unitary Group Representations [7.303827428956944]
We investigate a principled approach for symbolic operation completion (SOC)
SOC poses a unique challenge in modeling abstract relationships between discrete symbols.
We demonstrate that SOC can be efficiently solved by a minimal model - a bilinear map - with a novel factorized architecture.
arXiv Detail & Related papers (2024-02-26T20:18:43Z) - Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z) - Graph Embeddings via Tensor Products and Approximately Orthonormal Codes [0.0]
We show that our representation falls under the bind-and-sum approach in hyperdimensional computing.
We establish some precise results characterizing the behavior of our method.
We briefly discuss its applications toward a dynamic compressed representation of large sparse graphs.
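The bind-and-sum approach mentioned above can be sketched as follows (a minimal illustration under assumed random-code parameters, not the paper's exact construction): each edge is bound as the tensor product of its endpoints' codes, all edges are superposed into one matrix, and edge membership is detected with a quadratic form.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 512, 20
# One approximately orthonormal random code per node.
codes = rng.standard_normal((n, d)) / np.sqrt(d)

edges = [(0, 1), (2, 3), (4, 5)]

# Bind-and-sum: superpose the tensor products of the endpoint codes.
M = sum(np.outer(codes[u], codes[v]) for u, v in edges)

def edge_score(u, v):
    # Quadratic form: ~1 for stored edges, ~0 otherwise,
    # with crosstalk on the order of 1/sqrt(d).
    return codes[u] @ M @ codes[v]

print(edge_score(0, 1))  # close to 1: stored edge
print(edge_score(1, 0))  # close to 0: binding is order-sensitive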
arXiv Detail & Related papers (2022-08-18T10:56:37Z) - Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences [4.65149292714414]
A critical step in designing HDC/VSA solutions is obtaining such representations from the input data.
Here, we propose their transformation to distributed representations that both preserve the similarity of identical sequence elements at nearby positions and are equivariant to the sequence shift.
The proposed transformation was experimentally investigated with symbolic strings used for modeling human perception of word similarity.
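For context, a standard permutation-based baseline for sequence representation in HDC/VSA can be sketched as below (a hypothetical toy alphabet and dimensions, and deliberately not the paper's recursive scheme): each symbol is bound to its position by a cyclic shift, and the bound symbols are superposed, making the representation equivariant to sequence shift.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1024
# Bipolar random codes for a small symbol alphabet.
alphabet = {ch: rng.choice([-1, 1], size=d) for ch in "abcdef"}

def encode(seq):
    # Bind each symbol to its position by cyclic shift, then superpose.
    # Rolling the sum shifts every positional component by one, so the
    # representation is equivariant to shifting the sequence.
    return sum(np.roll(alphabet[ch], i) for i, ch in enumerate(seq))

def similarity(a, b):
    # Cosine similarity between two sequence hypervectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity(encode("abcd"), encode("abcd")))  # 1.0: identical sequences
print(similarity(encode("abcd"), encode("abce")))  # high: 3 of 4 symbols match
```

Note that plain cyclic shifts make identical symbols at *different* positions nearly orthogonal; preserving similarity across nearby positions is precisely what the paper's recursive binding adds over this baseline.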
arXiv Detail & Related papers (2022-01-27T17:41:28Z) - Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z) - Optimal radial basis for density-based atomic representations [58.720142291102135]
We discuss how to build an adaptive, optimal numerical basis that is chosen to represent most efficiently the structural diversity of the dataset at hand.
For each training dataset, this optimal basis is unique, and can be computed at no additional cost with respect to the primitive basis.
We demonstrate that this construction yields representations that are accurate and computationally efficient.
arXiv Detail & Related papers (2021-05-18T17:57:08Z) - The Immersion of Directed Multi-graphs in Embedding Fields. Generalisations [0.0]
This paper outlines a generalised model for representing hybrid-categorical, symbolic, perceptual-sensory and perceptual-latent data.
This variety of representation is currently used by various machine-learning models in computer vision and NLP/NLU.
It is achieved by endowing a directed relational-Typed Multi-Graph with at least some edge attributes which represent the embeddings from various latent spaces.
arXiv Detail & Related papers (2020-04-28T09:28:08Z) - Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolution networks (GCN).
We design a specific GAE-based model for graph clustering to be consistent with the theory, namely Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.