Computing with Residue Numbers in High-Dimensional Representation
- URL: http://arxiv.org/abs/2311.04872v1
- Date: Wed, 8 Nov 2023 18:19:45 GMT
- Title: Computing with Residue Numbers in High-Dimensional Representation
- Authors: Christopher J. Kymn, Denis Kleyko, E. Paxon Frady, Connor Bybee,
Pentti Kanerva, Friedrich T. Sommer, and Bruno A. Olshausen
- Abstract summary: We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors.
We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements.
- Score: 7.736925756277564
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce Residue Hyperdimensional Computing, a computing framework that
unifies residue number systems with an algebra defined over random,
high-dimensional vectors. We show how residue numbers can be represented as
high-dimensional vectors in a manner that allows algebraic operations to be
performed with component-wise, parallelizable operations on the vector
elements. The resulting framework, when combined with an efficient method for
factorizing high-dimensional vectors, can represent and operate on numerical
values over a large dynamic range using vastly fewer resources than previous
methods, and it exhibits impressive robustness to noise. We demonstrate the
potential for this framework to solve computationally difficult problems in
visual perception and combinatorial optimization, showing improvement over
baseline methods. More broadly, the framework provides a possible account for
the computational operations of grid cells in the brain, and it suggests new
machine learning architectures for representing and manipulating numerical
data.
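To make the encoding concrete, here is a minimal sketch, reconstructed from the abstract alone: integers are represented as element-wise powers of random phasor base vectors whose phases are m-th roots of unity (one base vector per modulus), so that adding two numbers reduces to component-wise multiplication of their vectors. The helper names are ours, and decoding is done by brute-force codebook search rather than the efficient vector factorization the paper pairs with the framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_vector(m, dim):
    # Random phasor vector whose phases are m-th roots of unity, so
    # element-wise powers wrap with period m (illustrative helper).
    return np.exp(2j * np.pi * rng.integers(0, m, size=dim) / m)

def encode(x, bases, moduli):
    # Residue encoding: bind (Hadamard product) the per-modulus vectors.
    return np.prod([z ** (x % m) for z, m in zip(bases, moduli)], axis=0)

moduli, dim = [3, 5, 7], 2000            # coprime moduli -> range 0..104
bases = [base_vector(m, dim) for m in moduli]

# Addition of numbers becomes component-wise multiplication of vectors.
v = encode(17, bases, moduli) * encode(42, bases, moduli)

# Brute-force decoding against all 105 encodings (the paper instead
# pairs the framework with an efficient factorization method).
codebook = np.array([encode(k, bases, moduli) for k in range(105)])
print(int(np.argmax(np.real(codebook.conj() @ v))))  # -> 59 == (17+42) % 105
```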
Related papers
- MemoryFormer: Minimize Transformer Computation by Removing Fully-Connected Layers [43.39466934693055]
We present MemoryFormer, a novel transformer architecture which significantly reduces the computational complexity (FLOPs) from a new perspective.
This is achieved by replacing the linear projections of fully-connected layers with an alternative method of feature transformation.
We conduct extensive experiments on various benchmarks to demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2024-11-20T02:41:53Z)
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z)
- Knowledge Composition using Task Vectors with Learned Anisotropic Scaling [51.4661186662329]
We introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level.
We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters.
We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives.
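A toy sketch of the mechanism as we read this summary: each task vector (finetuned minus pretrained weights) is added to the base parameters with a separate coefficient per parameter block, and those per-block coefficients would be the only learnable parameters. Names and shapes below are illustrative, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pretrained parameters and two task vectors, per block.
theta_pre = {"layer1": rng.normal(size=4), "layer2": rng.normal(size=4)}
task_vectors = [{k: rng.normal(size=4) for k in theta_pre} for _ in range(2)]

def compose(theta_pre, task_vectors, coeffs):
    # Anisotropic scaling: one coefficient per (task, block) pair rather
    # than a single scalar per task as in plain task arithmetic.
    theta = {k: v.copy() for k, v in theta_pre.items()}
    for tv, cs in zip(task_vectors, coeffs):
        for block, delta in tv.items():
            theta[block] += cs[block] * delta
    return theta

# In aTLAS these coefficients would be learned (supervised or not);
# fixed values are shown here purely for illustration.
coeffs = [{"layer1": 0.8, "layer2": 0.1}, {"layer1": -0.2, "layer2": 0.5}]
theta_task = compose(theta_pre, task_vectors, coeffs)
```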
arXiv Detail & Related papers (2024-07-03T07:54:08Z)
- Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures [6.473177443214531]
We introduce a new variant of the resonator network, based on self-attention update rules for the iterative search problem.
Our algorithm enables a larger capacity for associative memory, with applications in tasks such as perception-based pattern recognition, scene decomposition, and object reasoning.
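The summary gives few details, so the following is only a guess at the flavor: classic resonator dynamics for factorizing a bound pair of codewords, with the cleanup step replaced by an attention-style softmax over the codebook. The actual update rule in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 1024, 8                             # dimension, codewords per factor
X = rng.choice([-1.0, 1.0], size=(K, D))   # codebook for factor 1
Y = rng.choice([-1.0, 1.0], size=(K, D))   # codebook for factor 2
s = X[3] * Y[5]                            # composite: Hadamard binding

def attention_cleanup(v, codebook, temp=0.05):
    # Softmax-weighted superposition of codewords (attention-style).
    w = np.exp(codebook @ v / (temp * D))
    return (w / w.sum()) @ codebook

x_hat, y_hat = X.mean(0), Y.mean(0)        # start from superpositions
for _ in range(20):
    x_hat = attention_cleanup(s * y_hat, X)  # unbind y, clean up in X
    y_hat = attention_cleanup(s * x_hat, Y)  # unbind x, clean up in Y
print(np.argmax(X @ x_hat), np.argmax(Y @ y_hat))  # -> 3 5
```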
arXiv Detail & Related papers (2024-03-20T00:37:19Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding explicit computation of the model's covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
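A minimal sketch of what such a linear-operator abstraction can look like (our own illustration, not CoLA's actual API): operators expose only a matvec, and composite structures dispatch to structure-aware rules instead of materializing dense matrices.

```python
import numpy as np

class LinOp:
    # A linear operator is just a shape plus a matvec closure.
    def __init__(self, shape, matvec):
        self.shape, self.matvec = shape, matvec

def Diagonal(d):
    return LinOp((len(d), len(d)), lambda x: d * x)

def Sum(A, B):
    # Dispatch rule: (A + B)x = Ax + Bx; no dense sum is ever formed.
    return LinOp(A.shape, lambda x: A.matvec(x) + B.matvec(x))

def Kron(A, B):
    # (A kron B)x = vec(A X B^T) with X = x reshaped row-major, so the
    # nm x nm Kronecker matrix is never materialized.
    n, m = A.shape[0], B.shape[0]
    def mv(x):
        Xm = x.reshape(n, m)
        Y = np.stack([B.matvec(row) for row in Xm])             # X B^T
        return np.stack([A.matvec(c) for c in Y.T]).T.ravel()   # A X B^T
    return LinOp((n * m, n * m), mv)

K = Kron(Diagonal(np.arange(1., 4.)), Diagonal(np.arange(1., 3.)))
print(K.matvec(np.ones(6)))  # -> [1. 2. 2. 4. 3. 6.]
```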
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
- Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z)
- In-memory factorization of holographic perceptual representations [14.621617156897301]
Disentanglement of constituent factors of a sensory signal is central to perception and cognition.
We present a compute engine capable of efficiently factorizing holographic perceptual representations.
arXiv Detail & Related papers (2022-11-09T17:36:06Z)
- Efficient Inference via Universal LSH Kernel [35.22983601434134]
We propose the mathematically provable Representer Sketch, a concise set of count arrays that approximates the inference procedure with simple hashing computations and aggregations.
Representer Sketch builds upon the popular Representer Theorem from kernel literature, hence the name.
We show that Representer Sketch achieves up to 114x reduction in storage requirement and 59x reduction in complexity without any drop in accuracy.
arXiv Detail & Related papers (2021-06-21T22:06:32Z)
- Coded Distributed Computing with Partial Recovery [56.08535873173518]
We introduce a novel coded matrix-vector multiplication scheme, called coded computation with partial recovery (CCPR).
CCPR reduces both the computation time and the decoding complexity by allowing a trade-off between the accuracy and the speed of computation.
We then extend this approach to distributed implementation of more general computation tasks by proposing a coded communication scheme with partial recovery.
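As a toy illustration of coded matrix-vector multiplication (a single-parity code of our own choosing, not the paper's construction): the parity block lets the master decode the full product despite one straggler, and partial recovery would correspond to returning whichever row blocks are already decodable.

```python
import numpy as np

rng = np.random.default_rng(1)
A, x = rng.normal(size=(6, 4)), rng.normal(size=4)

# Encode: split A into 3 row blocks plus one parity block (their sum),
# yielding 4 worker tasks that tolerate any single straggler.
blocks = np.split(A, 3)
tasks = blocks + [blocks[0] + blocks[1] + blocks[2]]

# Suppose worker 1 straggles; the master hears from workers 0, 2, 3.
results = {i: tasks[i] @ x for i in (0, 2, 3)}
block1 = results[3] - results[0] - results[2]   # decode the missing block
y = np.concatenate([results[0], block1, results[2]])
assert np.allclose(y, A @ x)                    # exact full recovery
```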
arXiv Detail & Related papers (2020-07-04T21:34:49Z)