Sublinear Time Quantum Algorithm for Attention Approximation
- URL: http://arxiv.org/abs/2602.00874v1
- Date: Sat, 31 Jan 2026 19:33:52 GMT
- Title: Sublinear Time Quantum Algorithm for Attention Approximation
- Authors: Zhao Song, Jianfei Xue, Jiahao Zhang, Lichen Zhang
- Abstract summary: We propose a quantum data structure that approximates any row of $\mathrm{Att}(Q, K, V)$ using only row queries to $Q, K, V$. Our algorithm preprocesses these matrices in $\widetilde{O}\left( \varepsilon^{-1} n^{0.5} \left( s_\lambda^{2.5} + s_\lambda^{1.5} d + \alpha^{0.5} d \right) \right)$ time, where $\varepsilon$ is the target accuracy and $s_\lambda$ is the $\lambda$-statistical dimension of the exponential kernel defined by $Q$ and $K$.
- Score: 13.665266438908533
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given the query, key and value matrices $Q, K, V\in \mathbb{R}^{n\times d}$, the attention module is defined as $\mathrm{Att}(Q, K, V)=D^{-1}AV$, where $A=\exp(QK^\top/\sqrt{d})$ with $\exp(\cdot)$ applied entrywise and $D=\mathrm{diag}(A{\bf 1}_n)$. The attention module is the backbone of modern transformers and large language models, but explicitly forming the softmax matrix $D^{-1}A$ incurs $\Omega(n^2)$ time, motivating numerous approximation schemes that reduce the runtime to $\widetilde O(nd)$ via sparsity or low-rank factorization. We propose a quantum data structure that approximates any row of $\mathrm{Att}(Q, K, V)$ using only row queries to $Q, K, V$. Our algorithm preprocesses these matrices in $\widetilde{O}\left( \varepsilon^{-1} n^{0.5} \left( s_\lambda^{2.5} + s_\lambda^{1.5} d + \alpha^{0.5} d \right) \right)$ time, where $\varepsilon$ is the target accuracy, $s_\lambda$ is the $\lambda$-statistical dimension of the exponential kernel defined by $Q$ and $K$, and $\alpha$ measures the row distortion of $V$, which is at most $d/\mathrm{srank}(V)$ for $\mathrm{srank}(V)$ the stable rank of $V$. Each row query can be answered in $\widetilde{O}(s_\lambda^2 + s_\lambda d)$ time. To our knowledge, this is the first quantum data structure that approximates rows of the attention matrix in time sublinear in $n$. Our approach relies on a quantum Nyström approximation of the exponential kernel, quantum multivariate mean estimation for computing $D$, and quantum leverage score sampling for the multiplication with $V$.
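For reference, the definitions above translate directly into a few lines of NumPy. The sketch below is a hedged illustration of the formulas $A=\exp(QK^\top/\sqrt{d})$, $D=\mathrm{diag}(A{\bf 1}_n)$, $\mathrm{Att}(Q, K, V)=D^{-1}AV$, not of the paper's quantum algorithm: materializing $A$ exhibits the $\Omega(n^2)$ bottleneck, and even a single output row costs $O(nd)$ classically, which is the baseline the quantum row query improves on.

```python
import numpy as np

def attention(Q, K, V):
    """Exact Att(Q, K, V) = D^{-1} A V with A = exp(Q K^T / sqrt(d)), entrywise exp.

    Materializes the n x n matrix A, so it takes Omega(n^2) time and space:
    the bottleneck that motivates the approximation schemes discussed above.
    """
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[1]))   # n x n attention scores
    D = A.sum(axis=1)                           # diagonal of D = diag(A 1_n)
    return (A / D[:, None]) @ V                 # D^{-1} A V

def attention_row(Q, K, V, i):
    """Row i of Att(Q, K, V) from row queries: O(nd) classically, no n x n matrix."""
    d = Q.shape[1]
    a_i = np.exp(K @ Q[i] / np.sqrt(d))   # row i of A: a_i[j] = exp(q_i . k_j / sqrt(d))
    return (a_i @ V) / a_i.sum()          # (A V)_i / D_ii

rng = np.random.default_rng(0)
n, d = 256, 32
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
assert np.allclose(attention(Q, K, V)[5], attention_row(Q, K, V, 5))
```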
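The preprocessing/row-query split also has a simple classical analogue. The hedged sketch below applies a plain classical Nyström approximation in the special case $K=Q$ (where the exponential kernel matrix $A$ is positive semidefinite), with landmarks drawn uniformly at random; the class name `NystromAttentionRows`, the uniform landmark choice, and the $K=Q$ restriction are illustrative assumptions, since the paper's data structure is quantum, handles general $Q, K$, and combines quantum Nyström with leverage score sampling and quantum mean estimation for $D$. Note that this classical preprocessing is linear in $n$; reducing that to $n^{0.5}$ is precisely the quantum contribution. After preprocessing, each row query costs $O(md)$ for $m$ landmarks, independent of $n$, mirroring the $\widetilde{O}(s_\lambda^2 + s_\lambda d)$ row-query bound in spirit.

```python
import numpy as np

class NystromAttentionRows:
    """Classical Nystrom-style row queries for Att(Q, Q, V) (hypothetical name).

    Preprocessing costs O(n m d + m^3); afterwards each row query costs O(m d),
    independent of n. Uses A ~= C W^+ C^T with C = A[:, S], W = A[S, S].
    """

    def __init__(self, Q, V, m, rng):
        n, d = Q.shape
        self.Q, self.d = Q, d
        S = rng.choice(n, size=m, replace=False)   # uniform landmarks (not leverage scores)
        self.QS = Q[S]                             # m x d landmark rows
        C = np.exp(Q @ self.QS.T / np.sqrt(d))     # n x m slice of the kernel
        W_pinv = np.linalg.pinv(C[S])              # (A[S, S])^+
        self.M = W_pinv @ (C.T @ V)                # m x d: summarizes A V
        self.z = W_pinv @ C.sum(axis=0)            # length m: summarizes D = diag(A 1_n)

    def row(self, i):
        c_i = np.exp(self.QS @ self.Q[i] / np.sqrt(self.d))  # row i of C, O(m d)
        return (c_i @ self.M) / (c_i @ self.z)               # approximate (D^{-1} A V)_i

rng = np.random.default_rng(1)
n, d, m = 2048, 8, 256
Q = 0.1 * rng.standard_normal((n, d))   # small entries keep the kernel close to low rank
V = rng.standard_normal((n, d))
ds = NystromAttentionRows(Q, V, m, rng)
A = np.exp(Q @ Q.T / np.sqrt(d))
exact = (A / A.sum(axis=1, keepdims=True)) @ V
print(np.linalg.norm(ds.row(0) - exact[0]) / np.linalg.norm(exact[0]))  # relative error, row 0
```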
Related papers
- Spectral Gaps with Quantum Counting Queries and Oblivious State Preparation [47.600794349481966]
In this work, we present a quantum algorithm which approximates values up to additive error $\epsilon\Delta_k$ using a logarithmic number of qubits. A key technical step in the analysis is the preparation of a suitable random initial state, which ultimately allows us to efficiently count the number of eigenvalues that are smaller than a threshold.
arXiv Detail & Related papers (2025-08-28T17:04:18Z) - Improved Algorithms for Kernel Matrix-Vector Multiplication Under Sparsity Assumptions [23.539428616884035]
We study fast algorithms for computing matrix-vector products for asymmetric Gaussian kernel matrices $K\in \mathbb{R}^{n\times n}$. Our algorithms rely on the following modelling assumption about $K$: the sum of the entries of $K$ scales linearly in $n$, as opposed to the worst-case quadratic growth. We obtain the first subquadratic-time algorithm that works under this assumption, for unrestricted computation.
arXiv Detail & Related papers (2025-07-31T13:29:43Z) - LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions [54.54897832889028]
We show that for any $K$, there is a "universal set" $U \subset [n]$ of size independent of $n$, such that for any $Q$ and any row $i$, the large attention scores $A_{i,j}$ in row $i$ of $A$ all have $j \in U$.
We empirically show the benefits of our scheme for vision transformers, showing how to train new models that use our universal set during training as well.
arXiv Detail & Related papers (2024-10-07T19:47:13Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Hardness of Low Rank Approximation of Entrywise Transformed Matrix
Products [9.661328973620531]
Inspired by fast algorithms in natural language processing, we study low rank approximation in the entrywise transformed setting.
We give novel reductions from the Strong Exponential Time Hypothesis (SETH) that rely on lower bounding the leverage scores of flat sparse vectors.
Since our low rank algorithms rely on matrix-vector products, our lower bounds extend to show that computing $f(UV)W$, even for a small matrix $W$, requires $\Omega(n^{2-o(1)})$ time.
arXiv Detail & Related papers (2023-11-03T14:56:24Z) - A Quantum Approximation Scheme for k-Means [0.16317061277457]
We give a quantum approximation scheme for the classical $k$-means clustering problem in the QRAM model. Our quantum algorithm runs in time $\tilde{O}\left( 2^{\tilde{O}(\frac{k}{\varepsilon})} \eta^2 d\right)$. Unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines.
arXiv Detail & Related papers (2023-08-16T06:46:37Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Randomized and Deterministic Attention Sparsification Algorithms for Over-parameterized Feature Dimension [18.57735939471469]
We consider the sparsification of the attention problem.
For any super-large feature dimension, we show how to reduce it to a size nearly linear in the sentence length.
arXiv Detail & Related papers (2023-04-10T05:52:38Z) - Fast Attention Requires Bounded Entries [19.17278873525312]
Inner product attention computation is a fundamental task for training large language models such as Transformer, GPT-1, BERT, GPT-2, GPT-3 and ChatGPT.
We investigate whether faster algorithms are possible by implicitly making use of the matrix $A$.
This gives a theoretical explanation for the phenomenon observed in practice that attention computation is much more efficient when the input matrices have smaller entries.
arXiv Detail & Related papers (2023-02-26T02:42:39Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products; a minimal block Krylov sketch in this access model appears after this list.
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Spectral properties of sample covariance matrices arising from random matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and deterministic $A\in \mathcal{M}_p$, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we give a bound on $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$.
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Learning a Latent Simplex in Input-Sparsity Time [58.30321592603066]
We consider the problem of learning a latent $k$-vertex simplex $K\subset \mathbb{R}^{d}$, given access to $A\in \mathbb{R}^{d\times n}$.
We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$.
arXiv Detail & Related papers (2021-05-17T16:40:48Z)
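To make the matrix-vector product access model above concrete (the pointer from the $\tilde{O}(k/\sqrt{\epsilon})$ entry refers here), below is a minimal block Krylov iteration for rank-$k$ approximation of a symmetric matrix: a hedged textbook sketch, not the cited paper's refined algorithm. Every access to the input goes through the `matvec` callback, so the number of callback invocations is exactly the quantity such bounds count.

```python
import numpy as np

def block_krylov_lowrank(matvec, n, k, q, rng):
    """Rank-k approximation of a symmetric n x n matrix accessed only via v -> A v.

    Builds the block Krylov subspace [G, AG, ..., A^q G] from a random n x k
    block G, orthonormalizes it, projects, and returns the top-k Ritz pairs.
    Total matvec count: k*q to build the subspace plus k*(q+1) to project.
    """
    blocks = [rng.standard_normal((n, k))]
    for _ in range(q):
        blocks.append(np.column_stack([matvec(v) for v in blocks[-1].T]))
    Z, _ = np.linalg.qr(np.column_stack(blocks))   # orthonormal basis, n x k(q+1)
    AZ = np.column_stack([matvec(v) for v in Z.T])
    T = Z.T @ AZ
    T = (T + T.T) / 2                              # symmetrize for numerical stability
    evals, evecs = np.linalg.eigh(T)               # ascending eigenvalues
    U = Z @ evecs[:, -k:]                          # top-k Ritz vectors
    return U, evals[-k:]                           # A ~= U diag(evals) U^T

rng = np.random.default_rng(3)
n, k = 300, 5
B, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
A = (B * (1.0 / np.arange(1, n + 1) ** 2)) @ B.T   # eigenvalues 1, 1/4, 1/9, ...
U, lam = block_krylov_lowrank(lambda v: A @ v, n, k, q=6, rng=rng)
err = np.linalg.norm(A - (U * lam) @ U.T) / np.linalg.norm(A)
print(f"relative Frobenius error of the rank-{k} approximation: {err:.4f}")
```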
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.