Linearithmic Clean-up for Vector-Symbolic Key-Value Memory with Kronecker Rotation Products
- URL: http://arxiv.org/abs/2506.15793v1
- Date: Wed, 18 Jun 2025 18:23:28 GMT
- Title: Linearithmic Clean-up for Vector-Symbolic Key-Value Memory with Kronecker Rotation Products
- Authors: Ruipeng Liu, Qinru Qiu, Simon Khan, Garrett E. Katz
- Abstract summary: A computational bottleneck in current Vector-Symbolic Architectures is the ``clean-up'' step. We present a new codebook representation that supports efficient clean-up. The resulting clean-up time complexity is linearithmic, i.e. $\mathcal{O}(N\,\text{log}\,N)$.
- Score: 4.502446902578007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A computational bottleneck in current Vector-Symbolic Architectures (VSAs) is the ``clean-up'' step, which decodes the noisy vectors retrieved from the architecture. Clean-up typically compares noisy vectors against a ``codebook'' of prototype vectors, incurring computational complexity that is quadratic or similar. We present a new codebook representation that supports efficient clean-up, based on Kronecker products of rotation-like matrices. The resulting clean-up time complexity is linearithmic, i.e. $\mathcal{O}(N\,\text{log}\,N)$, where $N$ is the vector dimension and also the number of vectors in the codebook. Clean-up space complexity is $\mathcal{O}(N)$. Furthermore, the codebook is not stored explicitly in computer memory: It can be represented in $\mathcal{O}(\text{log}\,N)$ space, and individual vectors in the codebook can be materialized in $\mathcal{O}(N)$ time and space. At the same time, asymptotic memory capacity remains comparable to standard approaches. Computer experiments confirm these results, demonstrating several orders of magnitude more scalability than baseline VSA techniques.
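To make the complexity claims concrete, the sketch below shows one way a Kronecker-structured codebook can support linearithmic clean-up. It is a minimal illustration, assuming the codebook is the Kronecker product of $\log_2 N$ random 2x2 rotation matrices and that clean-up means finding the most similar prototype; the 2x2 factors, the bit-indexed row construction, and the helper names (`kron_matvec`, `codebook_row`, `cleanup`) are assumptions for illustration, not the paper's exact "rotation-like" construction.

```python
# Illustrative sketch (not the paper's exact construction): a codebook defined
# implicitly as a Kronecker product of log2(N) random 2x2 rotation matrices,
# with O(N log N) clean-up and O(N) materialization of a single prototype.
import numpy as np

def random_rotation_2x2(rng):
    """A random 2x2 rotation matrix (orthogonal, determinant +1)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def kron_matvec(factors, x):
    """Compute (R_1 kron R_2 kron ... kron R_m) @ x without materializing the
    N x N matrix. Each factor touches all N entries once, so the total cost is
    O(N log N) for m = log2(N) factors."""
    m = len(factors)
    y = x.reshape([2] * m)
    for axis, R in enumerate(factors):
        y = np.tensordot(R, y, axes=([1], [axis]))   # apply R along one mode
        y = np.moveaxis(y, 0, axis)                  # restore mode ordering
    return y.reshape(-1)

def codebook_row(factors, i):
    """Materialize codebook vector i (row i of the Kronecker product) in O(N)
    time and space from the O(log N)-sized list of factors."""
    m = len(factors)
    bits = [(i >> (m - 1 - k)) & 1 for k in range(m)]  # binary digits of i
    row = np.ones(1)
    for R, b in zip(factors, bits):
        row = np.kron(row, R[b])
    return row

def cleanup(factors, noisy):
    """Return the index of the codebook vector most similar to `noisy`.
    Dot-product similarities to all N prototypes are exactly the entries of
    the structured matrix-vector product, so no explicit codebook is needed."""
    return int(np.argmax(kron_matvec(factors, noisy)))

rng = np.random.default_rng(0)
m = 10                                   # log2(N)
N = 2 ** m                               # vector dimension = codebook size
factors = [random_rotation_2x2(rng) for _ in range(m)]

i = 37
noisy = codebook_row(factors, i) + 0.1 * rng.standard_normal(N)
print(cleanup(factors, noisy) == i)      # True with high probability
```

In this sketch the full $N \times N$ codebook never exists in memory: only the $\log_2 N$ small factors are stored, one prototype is materialized in $\mathcal{O}(N)$ time, and clean-up costs $\mathcal{O}(N\,\text{log}\,N)$ because each of the $\log_2 N$ factors touches all $N$ entries once, in line with the complexities stated in the abstract.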
Related papers
- Dimension reduction with structure-aware quantum circuits for hybrid machine learning [0.0]
Schmidt decomposition of a vector can be understood as writing the singular value decomposition (SVD) in vector form. We show that quantum circuits designed on a value $k$ can approximate the reduced-form representations of entire datasets.
arXiv Detail & Related papers (2025-07-31T17:18:43Z) - A Walsh Hadamard Derived Linear Vector Symbolic Architecture [83.27945465029167]
Vector Symbolic Architectures (VSAs) are an approach to developing Neuro-symbolic AI.
HLB (Hadamard-derived Linear Binding) is designed to have favorable computational efficiency and efficacy in classic VSA tasks.
arXiv Detail & Related papers (2024-10-30T03:42:59Z) - Dictionary-based Block Encoding of Sparse Matrices with Low Subnormalization and Circuit Depth [2.4487770108795393]
We propose an efficient block-encoding protocol for sparse matrices based on a novel data structure. Non-zero elements with the same values belong to the same classification in our block-encoding protocol's dictionary. Our protocol connects to linear combinations of unitaries (LCU) and the sparse access input model (SAIM).
arXiv Detail & Related papers (2024-05-28T09:49:58Z) - Matrix Compression via Randomized Low Rank and Low Precision Factorization [47.902465710511485]
Modern matrices can involve billions of elements, making their storage and processing quite demanding in terms of computational resources and memory usage.
We propose an algorithm that exploits this structure to obtain a low rank decomposition of any matrix $\mathbf{A}$ as $\mathbf{L}\mathbf{R}$.
We empirically demonstrate the efficacy of our algorithm in image compression, nearest neighbor classification of image and text embeddings, and compressing the layers of LlaMa-7b.
arXiv Detail & Related papers (2023-10-17T06:56:57Z) - Online Clustered Codebook [100.1650001618827]
We present a simple alternative method for online codebook learning, Clustering VQ-VAE (CVQ-VAE).
Our approach selects encoded features as anchors to update the ``dead'' codevectors, while optimising the codebooks which are alive via the original loss.
Our CVQ-VAE can be easily integrated into the existing models with just a few lines of code.
arXiv Detail & Related papers (2023-07-27T18:31:04Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Factorizers for Distributed Sparse Block Codes [45.29870215671697]
We propose a fast and highly accurate method for factorizing distributed sparse block codes (SBCs).
Our iterative factorizer introduces a threshold-based nonlinear activation, conditional random sampling, and an $\ell_\infty$-based similarity metric.
We demonstrate the feasibility of our method on four deep CNN architectures over CIFAR-100, ImageNet-1K, and RAVEN datasets.
arXiv Detail & Related papers (2023-03-24T12:31:48Z) - Batch-efficient EigenDecomposition for Small and Medium Matrices [65.67315418971688]
EigenDecomposition (ED) is at the heart of many computer vision algorithms and applications.
We propose a QR-based ED method dedicated to the application scenarios of computer vision.
arXiv Detail & Related papers (2022-07-09T09:14:12Z) - On the well-spread property and its relation to linear regression [4.619541348328937]
We show that consistent recovery of the parameter vector in a robust linear regression model is information-theoretically impossible.
We show that it is possible to efficiently certify whether a given $n$-by-$d$ matrix is well-spread if the number of observations is quadratic in the ambient dimension.
arXiv Detail & Related papers (2022-06-16T11:17:44Z) - Orthogonal Matrices for MBAT Vector Symbolic Architectures, and a "Soft" VSA Representation for JSON [0.0]
Vector Symbolic Architectures (VSAs) give a way to represent a complex object as a single fixed-length vector, so that similar objects have similar vector representations.
We review a previously proposed VSA method, MBAT (Matrix Binding of Additive Terms), which uses random matrices for binding related terms.
arXiv Detail & Related papers (2022-02-08T18:41:32Z) - Impossibility Results for Grammar-Compressed Linear Algebra [14.85374185122389]
To handle vast amounts of data, it is natural and popular to compress vectors and matrices.
This paper asks if we can run our computations on the compressed data as efficiently as if the original data was that small.
We consider the most basic linear algebra operations: inner product, matrix-vector multiplication, and matrix multiplication.
arXiv Detail & Related papers (2020-10-27T10:33:01Z) - Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and Graph Problems [58.83118651518438]
We consider the general problem of learning about a matrix through vector-matrix-vector queries.
These queries provide the value of $\boldsymbol{u}^{\mathrm{T}}\boldsymbol{M}\boldsymbol{v}$ over a fixed field.
We provide new upper and lower bounds for a wide variety of problems, spanning linear algebra, statistics, and graphs.
arXiv Detail & Related papers (2020-06-24T19:33:49Z)