The Hyperdimensional Transform: a Holographic Representation of Functions
- URL: http://arxiv.org/abs/2310.16065v1
- Date: Tue, 24 Oct 2023 11:33:39 GMT
- Title: The Hyperdimensional Transform: a Holographic Representation of Functions
- Authors: Pieter Dewulf, Michiel Stock, Bernard De Baets
- Abstract summary: We introduce the hyperdimensional transform as a new kind of integral transform.
It converts square-integrable functions into noise-robust, holographic, high-dimensional representations called hyperdimensional vectors.
It provides theoretical foundations and new insights for the field of hyperdimensional computing.
- Score: 12.693238093510072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integral transforms are invaluable mathematical tools to map functions into
spaces where they are easier to characterize. We introduce the hyperdimensional
transform as a new kind of integral transform. It converts square-integrable
functions into noise-robust, holographic, high-dimensional representations
called hyperdimensional vectors. The central idea is to approximate a function
by a linear combination of random functions. We formally introduce a set of
stochastic, orthogonal basis functions and define the hyperdimensional
transform and its inverse. We discuss general transform-related properties such
as its uniqueness, approximation properties of the inverse transform, and the
representation of integrals and derivatives. The hyperdimensional transform
offers a powerful, flexible framework that connects closely with other integral
transforms, such as the Fourier, Laplace, and fuzzy transforms. Moreover, it
provides theoretical foundations and new insights for the field of
hyperdimensional computing, a computing paradigm that is rapidly gaining
attention for efficient and explainable machine learning algorithms, with
potential applications in statistical modelling and machine learning. In
addition, we provide straightforward and easily understandable code, which can
function as a tutorial and allows for the reproduction of the demonstrated
examples, from computing the transform to solving differential equations.
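The central idea above, approximating a square-integrable function by a linear combination of random basis functions, can be illustrated with a toy numerical sketch. The paper's actual stochastic orthogonal basis is not specified in this abstract, so the following is an assumption-laden stand-in: random Fourier features serve as the random basis, and the "transform" is computed as a ridge-regularized least-squares fit rather than by the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 500                                   # hyperdimensional dimension (assumed)
x = np.linspace(0.0, 1.0, 200)

# Random basis functions: random Fourier features as a stand-in for the
# paper's stochastic orthogonal basis (an assumption for illustration).
w = rng.normal(scale=8.0, size=D)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Phi = np.cos(np.outer(x, w) + b)          # design matrix, shape (200, D)

f = np.sin(2.0 * np.pi * x)               # a square-integrable test function

# "Transform": coefficients of the best linear combination of the random
# basis functions, found by ridge-regularized least squares for stability.
F = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(D), Phi.T @ f)

# "Inverse transform": evaluate the linear combination on the grid.
f_hat = Phi @ F

max_err = np.max(np.abs(f_hat - f))       # small reconstruction error
```

Because the representation spreads information about `f` across all `D` coordinates, perturbing a few entries of `F` degrades the reconstruction only mildly, which gives a rough intuition for the noise-robust, holographic character the abstract describes.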
Related papers
- EulerFormer: Sequential User Behavior Modeling with Complex Vector Attention [88.45459681677369]
We propose a novel transformer variant with complex vector attention, named EulerFormer.
It provides a unified theoretical framework to formulate both semantic difference and positional difference.
It is more robust to semantic variations and possesses superior theoretical properties in principle.
arXiv Detail & Related papers (2024-03-26T14:18:43Z)
- Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products [16.84090726181652]
We propose a systematic approach to reduce the complexity of the tensor products of irreps.
We introduce the Gaunt Product, which serves as a new method to construct efficient equivariant operations.
Our experiments on the Open Catalyst Project and 3BPA datasets demonstrate both the increased efficiency and improved performance.
arXiv Detail & Related papers (2024-01-18T18:57:10Z)
- The Hyperdimensional Transform for Distributional Modelling, Regression and Classification [12.693238093510072]
We present the hyperdimensional transform to a broad data science audience and demonstrate its power.
We show how existing algorithms can be modified and how this transform can lead to a novel, well-founded toolbox.
arXiv Detail & Related papers (2023-11-14T13:26:49Z)
- Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input [50.83356836818667]
We study the approximation and estimation ability of Transformers as sequence-to-sequence functions with infinite dimensional inputs.
Our theoretical results support the practical success of Transformers for high dimensional data.
arXiv Detail & Related papers (2023-05-30T02:44:49Z)
- Integral Transforms in a Physics-Informed (Quantum) Neural Network setting: Applications & Use-Cases [1.7403133838762446]
In many computational problems in engineering and science, differentiation of functions or models is essential, but integration is often needed as well.
In this work, we propose to augment the paradigm of Physics-Informed Neural Networks with automatic integration.
arXiv Detail & Related papers (2022-06-28T17:51:32Z)
- Orthonormal Convolutions for the Rotation Based Iterative Gaussianization [64.44661342486434]
This paper elaborates an extension of rotation-based iterative Gaussianization, RBIG, which makes image Gaussianization possible.
In images its application has been restricted to small image patches or isolated pixels, because rotation in RBIG is based on principal or independent component analysis.
We present the Convolutional RBIG: an extension that alleviates this issue by imposing that the rotation in RBIG is a convolution.
arXiv Detail & Related papers (2022-06-08T12:56:34Z)
- 3D Equivariant Graph Implicit Functions [51.5559264447605]
We introduce a novel family of graph implicit functions with equivariant layers that facilitates modeling fine local details.
Our method improves over the existing rotation-equivariant implicit function from 0.69 to 0.89 on the ShapeNet reconstruction task.
arXiv Detail & Related papers (2022-03-31T16:51:25Z)
- Similarity Equivariant Linear Transformation of Joint Orientation-Scale Space Representations [11.57423546614283]
Group convolution generalizes the concept of convolution to linear operations equivariant to a group action.
Group convolution that is equivariant to similarity transformation is the most general shape preserving linear operator.
We present an initial demonstration of its utility by using it to compute a shape-equivariant distribution of closed contours traced by particles undergoing Brownian motion in velocity.
arXiv Detail & Related papers (2022-03-13T23:53:51Z)
- Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases [73.53227696624306]
We present a new family of algorithms for learning Fourier-sparse set functions.
In contrast to other work that focused on the Walsh-Hadamard transform, our novel algorithms operate with recently introduced non-orthogonal Fourier transforms.
We demonstrate effectiveness on several real-world applications.
arXiv Detail & Related papers (2020-10-01T14:31:59Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.