Linear Codes for Hyperdimensional Computing
- URL: http://arxiv.org/abs/2403.03278v1
- Date: Tue, 5 Mar 2024 19:18:44 GMT
- Title: Linear Codes for Hyperdimensional Computing
- Authors: Netanel Raviv
- Abstract summary: We show that random linear codes offer a rich subcode structure that can be used to form key-value stores.
We show that under the framework we develop, random linear codes admit simple recovery algorithms to factor (either bundled or bound) compositional representations.
- Score: 9.7902367664742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperdimensional Computing (HDC) is an emerging computational paradigm for
representing compositional information as high-dimensional vectors, and has a
promising potential in applications ranging from machine learning to
neuromorphic computing. One of the long-standing challenges in HDC is factoring
a compositional representation to its constituent factors, also known as the
recovery problem. In this paper we take a novel approach to solve the recovery
problem, and propose the use of random linear codes. These codes are subspaces
over the Boolean field, and are a well-studied topic in information theory with
various applications in digital communication. We begin by showing that
hyperdimensional encoding using random linear codes retains favorable
properties of the prevalent (ordinary) random codes, and hence HD
representations using the two methods have comparable information storage
capabilities. We proceed to show that random linear codes offer a rich subcode
structure that can be used to form key-value stores, which encapsulate most use
cases of HDC. Most importantly, we show that under the framework we develop,
random linear codes admit simple recovery algorithms to factor (either bundled
or bound) compositional representations. The former relies on constructing
certain linear equation systems over the Boolean field, the solution to which
reduces the search space dramatically and strictly outperforms exhaustive
search in many cases. The latter employs the subspace structure of these codes
to achieve provably correct factorization. Both methods are strictly faster
than the state-of-the-art resonator networks, often by an order of magnitude.
We implemented our techniques in Python using a benchmark software library, and
demonstrated promising experimental results.
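To make the subspace structure concrete, here is a minimal Python sketch of hyperdimensional encoding with a random linear code over GF(2). The dimensions, the generator matrix G, and the majority bundling rule are illustrative assumptions rather than the paper's exact construction; the point is that binding (XOR) of two codewords is again a codeword, which is the linearity the recovery algorithms exploit.
```python
import numpy as np

rng = np.random.default_rng(0)

k, n = 20, 512  # message length and (high) codeword dimension -- illustrative
G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)  # random generator matrix

def encode(x: np.ndarray) -> np.ndarray:
    """Map a k-bit message to its n-bit hypervector: the codeword xG over GF(2)."""
    return (x @ G) % 2

x = rng.integers(0, 2, size=k, dtype=np.uint8)
y = rng.integers(0, 2, size=k, dtype=np.uint8)

# Binding (XOR): because the code is a subspace of GF(2)^n, the bound vector
# is itself the codeword of x XOR y -- it never leaves the code.
bound = encode(x) ^ encode(y)
assert np.array_equal(bound, encode(x ^ y))

# Bundling (majority vote): the result generally leaves the code, but stays
# close in Hamming distance to each of the bundled codewords.
votes = encode(x).astype(int) + encode(y).astype(int) + encode(x ^ y).astype(int)
bundle = (votes >= 2).astype(np.uint8)
```
This asymmetry is why the abstract describes two different recovery routes: linear equation systems for bundled representations, and the subcode structure for bound ones.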
Related papers
- Decoding Quasi-Cyclic Quantum LDPC Codes [23.22566380210149]
Quantum low-density parity-check (qLDPC) codes are an important component in the quest for fault tolerance.
Recent progress on qLDPC codes has led to constructions which are asymptotically good, and which admit linear-time decoders to correct errors affecting a constant fraction of codeword qubits.
In practice, the surface/toric codes, which are the product of two repetition codes, are still often the qLDPC codes of choice.
arXiv Detail & Related papers (2024-11-07T06:25:27Z)
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to outperform the decoding performance of existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
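The summary does not spell out the masking mechanism, so the PyTorch sketch below only illustrates the general idea of differentiable self-attention masking: a sigmoid-relaxed mask enters the attention logits additively, letting gradients flow through it. The function name and the relaxation are assumptions, not the paper's architecture.
```python
import torch
import torch.nn.functional as F

def soft_masked_attention(q, k, v, mask_logits):
    """Scaled dot-product attention with a differentiable (soft) mask.

    A hard 0/1 mask would block gradients; adding log(sigmoid(mask_logits))
    to the logits instead applies a soft mask that remains differentiable.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    scores = scores + F.logsigmoid(mask_logits)  # mask_logits: learnable (seq, seq)
    return torch.softmax(scores, dim=-1) @ v
```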
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Injective Rank Metric Trapdoor Functions with Homogeneous Errors [2.117778717665161]
In rank-metric cryptography, a vector from a finite dimensional linear space over a finite field is viewed as the linear space spanned by its entries.
We introduce a new construction of injective one-way trapdoor functions.
Our solution departs from the frequent way of building public key primitives from error-correcting codes.
arXiv Detail & Related papers (2023-10-13T09:12:25Z)
- CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose CORE (Common Random Reconstruction), which can be used to compress the information transmitted between machines.
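As a rough sketch of the common-randomness idea (the random-projection estimator below is a generic assumption, not necessarily CORE's exact reconstruction): both machines derive identical random directions from a shared seed, so only low-dimensional projection coefficients need to be transmitted.
```python
import numpy as np

def compress(grad: np.ndarray, seed: int, m: int) -> np.ndarray:
    """Sender: project the gradient onto m shared random directions and
    transmit only the m coefficients; the seed itself is known to both sides."""
    R = np.random.default_rng(seed).standard_normal((m, grad.size))
    return (R @ grad) / np.sqrt(m)

def reconstruct(coeffs: np.ndarray, seed: int, d: int) -> np.ndarray:
    """Receiver: regenerate the identical directions from the shared seed and
    form an unbiased estimate of the gradient (E[R.T @ R / m] = I)."""
    R = np.random.default_rng(seed).standard_normal((coeffs.size, d))
    return (R.T @ coeffs) / np.sqrt(coeffs.size)
```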
arXiv Detail & Related papers (2023-09-23T08:45:27Z)
- Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning [0.11805137592431453]
The paper introduces the application of information geometry to describe the ground states of Ising models.
The approach establishes a connection between machine learning and error-correcting coding.
arXiv Detail & Related papers (2023-07-28T19:38:13Z)
- Streaming Encoding Algorithms for Scalable Hyperdimensional Computing [12.829102171258882]
Hyperdimensional computing (HDC) is a paradigm for data representation and learning originating in computational neuroscience.
In this work, we explore a family of streaming encoding techniques based on hashing.
We show formally that these methods enjoy comparable guarantees on performance for learning applications while being substantially more efficient than existing alternatives.
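A minimal sketch of one such hash-based streaming encoder, assuming a CRC-seeded bipolar codebook and sign-thresholded bundling (both illustrative choices, not necessarily the paper's):
```python
import zlib
import numpy as np

def token_hv(token: str, dim: int = 10_000) -> np.ndarray:
    """Derive a pseudo-random bipolar hypervector from a stable hash of the
    token; no codebook is stored, so tokens can be encoded on the fly."""
    rng = np.random.default_rng(zlib.crc32(token.encode("utf-8")))
    return rng.choice((-1, 1), size=dim)

def encode_stream(tokens, dim: int = 10_000) -> np.ndarray:
    """Bundle a stream of tokens by accumulating their hypervectors;
    memory stays O(dim) no matter how long the stream is."""
    acc = np.zeros(dim, dtype=np.int64)
    for t in tokens:
        acc += token_hv(t, dim)
    return np.sign(acc)  # threshold back to a (near-)bipolar representation
```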
arXiv Detail & Related papers (2022-09-20T17:25:14Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
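One standard way to handle circular variables in HDC is fractional-power-style phase encoding; the sketch below, with integer base phases chosen for exact 2*pi periodicity, illustrates that idea and is not necessarily the basis-hypervector method proposed in the paper.
```python
import numpy as np

def circular_hv(theta: float, base_phases: np.ndarray) -> np.ndarray:
    """Encode an angle theta (radians) as a complex hypervector whose phases
    rotate with theta; integer base phases make the map 2*pi-periodic, and
    nearby angles produce similar (high inner product) hypervectors."""
    return np.exp(1j * theta * base_phases)

rng = np.random.default_rng(0)
base_phases = rng.integers(-5, 6, size=10_000)  # integers => exact wrap-around

a = circular_hv(0.1, base_phases)
b = circular_hv(0.1 + 2 * np.pi, base_phases)
assert np.allclose(a, b)  # theta and theta + 2*pi encode identically
```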
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
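The summary leaves the diagonal-estimation result unnamed; the sketch below pairs the classic Hutchinson-style diagonal estimator with a hand-rolled conjugate gradient solver, matching the described recipe in spirit: only matrix-vector products are needed, and the covariance matrix is never formed or inverted explicitly.
```python
import numpy as np

def cg_solve(A, b, tol=1e-8, max_iter=500):
    """Plain conjugate gradient for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def estimate_inv_diagonal(A, num_probes=64, rng=None):
    """Hutchinson-style estimate of diag(A^{-1}): average z * (A^{-1} z)
    over random sign probes z, solving each system with CG."""
    rng = rng or np.random.default_rng(0)
    d = np.zeros(A.shape[0])
    for _ in range(num_probes):
        z = rng.choice((-1.0, 1.0), size=A.shape[0])
        d += z * cg_solve(A, z)
    return d / num_probes
```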
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.