A Fast Optimization View: Reformulating Single Layer Attention in LLM
Based on Tensor and SVM Trick, and Solving It in Matrix Multiplication Time
- URL: http://arxiv.org/abs/2309.07418v1
- Date: Thu, 14 Sep 2023 04:23:40 GMT
- Title: A Fast Optimization View: Reformulating Single Layer Attention in LLM
Based on Tensor and SVM Trick, and Solving It in Matrix Multiplication Time
- Authors: Yeqi Gao, Zhao Song, Weixin Wang, Junze Yin
- Abstract summary: We focus on giving a provable guarantee for the one-layer attention network objective function $L(X,Y)$.
In a multi-layer LLM network, the matrix $B \in \mathbb{R}^{n \times d}$ can be viewed as the output of a layer.
We provide an iterative algorithm to train the loss function $L(X,Y)$ up to $\epsilon$ that runs in $\widetilde{O}( ({\cal T}_{\mathrm{mat}}(n,n,d) + {\cal T}_{\mathrm{mat}}(n,d,d) + d^{2\omega}) \log(1/\epsilon) )$ time.
- Score: 7.613259578185218
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language models (LLMs) have played a pivotal role in revolutionizing
various facets of our daily existence. Solving attention regression is a
fundamental task in optimizing LLMs. In this work, we focus on giving a
provable guarantee for the one-layer attention network objective function
$L(X,Y) = \sum_{j_0 = 1}^n \sum_{i_0 = 1}^d ( \langle \langle \exp(
\mathsf{A}_{j_0} x ) , {\bf 1}_n \rangle^{-1} \exp( \mathsf{A}_{j_0} x ), A_{3}
Y_{*,i_0} \rangle - b_{j_0,i_0} )^2$. Here $\mathsf{A} \in \mathbb{R}^{n^2
\times d^2}$ is the Kronecker product of $A_1 \in \mathbb{R}^{n \times d}$ and
$A_2 \in \mathbb{R}^{n \times d}$, $A_3$ is a matrix in $\mathbb{R}^{n \times
d}$, and $\mathsf{A}_{j_0} \in \mathbb{R}^{n \times d^2}$ is the $j_0$-th block of
$\mathsf{A}$. The matrices $X, Y \in \mathbb{R}^{d \times d}$ are the variables we want to
learn. $B \in \mathbb{R}^{n \times d}$ is a given matrix, $b_{j_0,i_0} \in \mathbb{R}$
is its entry in the $j_0$-th row and $i_0$-th column, $Y_{*,i_0} \in \mathbb{R}^d$
is the $i_0$-th column of $Y$, and $x \in \mathbb{R}^{d^2}$ is the
vectorization of $X$.
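To make the notation concrete, the following is a minimal numpy sketch (not from the paper) that evaluates $L(X,Y)$ directly from the block definition above. It assumes a row-major vectorization $x = \mathrm{vec}(X)$ and that the $j_0$-th block of $\mathsf{A} = A_1 \otimes A_2$ consists of rows $j_0 n, \dots, (j_0+1)n-1$; the abstract does not fix these conventions, and all sizes and inputs are placeholder data.

```python
import numpy as np

# Placeholder sizes and random data; the row-major vec / row-block layout is an assumption.
rng = np.random.default_rng(0)
n, d = 6, 3
A1, A2, A3 = (rng.standard_normal((n, d)) for _ in range(3))
B = rng.standard_normal((n, d))
X = 0.1 * rng.standard_normal((d, d))
Y = 0.1 * rng.standard_normal((d, d))

A = np.kron(A1, A2)          # \mathsf{A} in R^{n^2 x d^2}
x = X.flatten()              # x = vec(X) in R^{d^2}, row-major

loss = 0.0
for j0 in range(n):
    Aj0 = A[j0 * n:(j0 + 1) * n, :]      # j0-th block \mathsf{A}_{j_0} in R^{n x d^2}
    u = np.exp(Aj0 @ x)                  # exp(\mathsf{A}_{j_0} x)
    p = u / u.sum()                      # <exp(.), 1_n>^{-1} exp(.), i.e. a softmax over n scores
    for i0 in range(d):
        pred = p @ (A3 @ Y[:, i0])       # <p, A_3 Y_{*,i_0}>
        loss += (pred - B[j0, i0]) ** 2
print(loss)
```

Under this convention, $\mathsf{A}_{j_0} x$ is exactly the $j_0$-th row of $A_1 X A_2^\top$, which is what connects the objective to ordinary attention.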
In a multi-layer LLM network, the matrix $B \in \mathbb{R}^{n \times d}$ can
be viewed as the output of a layer, and $A_1= A_2 = A_3 \in \mathbb{R}^{n
\times d}$ can be viewed as the input of a layer. The matrix version of $x$ can
be viewed as $QK^\top$ and $Y$ can be viewed as $V$. We provide an iterative
greedy algorithm to train the loss function $L(X,Y)$ up to accuracy $\epsilon$ that runs in
$\widetilde{O}( ({\cal T}_{\mathrm{mat}}(n,n,d) + {\cal
T}_{\mathrm{mat}}(n,d,d) + d^{2\omega}) \log(1/\epsilon) )$ time. Here ${\cal
T}_{\mathrm{mat}}(a,b,c)$ denotes the time of multiplying an $a \times b$ matrix
by a $b \times c$ matrix, and $\omega\approx 2.37$ denotes the exponent of
matrix multiplication.
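Equivalently, under the same assumed convention, the prediction can be written in matrix form as $D^{-1} \exp(A_1 X A_2^\top) A_3 Y$ with $D = \mathrm{diag}(\exp(A_1 X A_2^\top)\mathbf{1}_n)$, i.e. ordinary softmax attention in which $A_1 X A_2^\top$ plays the role of $QK^\top$ and $A_3 Y$ plays the role of the value matrix. Below is a minimal sketch with $A_1 = A_2 = A_3 = A$ on placeholder data; it illustrates the loss only, not the paper's iterative greedy solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3
A = rng.standard_normal((n, d))        # A1 = A2 = A3: the layer input
B = rng.standard_normal((n, d))        # the layer output, used as the regression target
X = 0.1 * rng.standard_normal((d, d))  # matrix version of x, viewed as Q K^T
Y = 0.1 * rng.standard_normal((d, d))  # viewed as V

S = np.exp(A @ X @ A.T)                   # exp(A1 X A2^T), entrywise exponential
attn = S / S.sum(axis=1, keepdims=True)   # D^{-1} exp(A1 X A2^T): row-wise softmax
pred = attn @ (A @ Y)                     # attention output
loss = np.sum((pred - B) ** 2)            # Frobenius form of L(X, Y)
print(loss)
```

With the row-major convention above, this Frobenius-norm expression coincides with the entrywise sum in the sketch following the objective definition.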
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k} \cdot n^2 \log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k} \cdot n^2 \log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions [54.54897832889028]
We show that for any $K$, there is a "universal set" $U \subset [n]$ of size independent of $n$, such that for any $Q$ and any row $i$, the large attention scores $A_{i,j}$ in row $i$ of $A$ all have $j \in U$.
We empirically show the benefits of our scheme for vision transformers, showing how to train new models that use our universal set during training as well.
arXiv Detail & Related papers (2024-10-07T19:47:13Z) - Efficient Matrix Factorization Via Householder Reflections [2.3326951882644553]
We show that the exact recovery of the factors $\mathbf{H}$ and $\mathbf{X}$ from $\mathbf{Y}$ is guaranteed with $\Omega$ columns in $\mathbf{Y}$.
We hope the techniques in this work help in developing alternate algorithms for dictionary learning.
arXiv Detail & Related papers (2024-05-13T11:13:49Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Randomized and Deterministic Attention Sparsification Algorithms for
Over-parameterized Feature Dimension [18.57735939471469]
We consider the sparsification of the attention problem.
For any super large feature dimension, we can reduce it down to a size nearly linear in the length of the sentence.
arXiv Detail & Related papers (2023-04-10T05:52:38Z) - Fast Attention Requires Bounded Entries [19.17278873525312]
Inner product attention computation is a fundamental task for training large language models such as Transformer, GPT-1, BERT, GPT-2, GPT-3 and ChatGPT.
We investigate whether faster algorithms are possible by implicitly making use of the matrix $A$.
This gives a theoretical explanation for the phenomenon observed in practice that attention computation is much more efficient when the input matrices have smaller entries.
arXiv Detail & Related papers (2023-02-26T02:42:39Z) - A Nearly-Optimal Bound for Fast Regression with $\ell_\infty$ Guarantee [16.409210914237086]
Given a matrix $A \in \mathbb{R}^{n \times d}$ and a vector $b \in \mathbb{R}^n$, we consider the regression problem with $\ell_\infty$ guarantees.
We show that in order to obtain such an $\ell_\infty$ guarantee for $\ell_2$ regression, one has to use sketching matrices that are dense.
We also develop a novel analytical framework for $\ell_\infty$ guarantee regression that utilizes the Oblivious Coordinate-wise Embedding (OCE) property.
arXiv Detail & Related papers (2023-02-01T05:22:40Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(A R(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Algorithms and Hardness for Linear Algebra on Geometric Graphs [14.822517769254352]
We show that the exponential dependence on the dimension $d$ in the celebrated fast multipole method of Greengard and Rokhlin cannot be improved.
This is the first formal limitation proven about fast multipole methods.
arXiv Detail & Related papers (2020-11-04T18:35:02Z)