SOFT: Softmax-free Transformer with Linear Complexity
- URL: http://arxiv.org/abs/2110.11945v1
- Date: Fri, 22 Oct 2021 17:57:29 GMT
- Title: SOFT: Softmax-free Transformer with Linear Complexity
- Authors: Jiachen Lu, Jinghan Yao, Junge Zhang, Xiatian Zhu, Hang Xu, Weiguo
Gao, Chunjing Xu, Tao Xiang, Li Zhang
- Abstract summary: Vision transformers (ViTs) have pushed the state-of-the-art for various visual recognition tasks by patch-wise image tokenization followed by self-attention.
Various attempts at approximating self-attention with linear complexity have been made in Natural Language Processing.
We identify that their limitations are rooted in keeping the softmax self-attention during approximations.
For the first time, a softmax-free transformer or SOFT is proposed.
- Score: 112.9754491864247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision transformers (ViTs) have pushed the state-of-the-art for various
visual recognition tasks by patch-wise image tokenization followed by
self-attention. However, the employment of self-attention modules results in
quadratic complexity in both computation and memory usage. Various attempts at
approximating the self-attention computation with linear complexity have been
made in Natural Language Processing. However, an in-depth analysis in this work
shows that they are either theoretically flawed or empirically ineffective for
visual recognition. We further identify that their limitations are rooted in
keeping the softmax self-attention during approximations. Specifically,
conventional self-attention is computed by normalizing the scaled dot-product
between token feature vectors. Keeping this softmax operation challenges any
subsequent linearization efforts. Based on this insight, a softmax-free
transformer, or SOFT, is proposed for the first time. To remove the softmax in
self-attention, a Gaussian kernel function is used to replace the dot-product
similarity without further normalization. This enables the full self-attention
matrix to be approximated via a low-rank matrix decomposition. The robustness
of the approximation is achieved by computing its Moore-Penrose inverse with a
Newton-Raphson method. Extensive experiments on ImageNet show that SOFT
significantly improves the computational efficiency of existing ViT variants.
Crucially, with linear complexity, much longer token sequences are permitted
in SOFT, resulting in a superior trade-off between accuracy and complexity.
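The abstract outlines a three-part recipe: replace the softmax-normalized dot-product with a Gaussian kernel, approximate the resulting attention matrix with a low-rank (Nyström-style) decomposition built from a small set of sampled landmark tokens, and stabilize that decomposition by computing the Moore-Penrose inverse of the landmark block with a Newton-Raphson iteration. The NumPy sketch below is a minimal illustration of this recipe, not the authors' released implementation; the landmark count `m`, the kernel bandwidth, the iteration count, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, scale):
    # Softmax-free similarity: exp(-||a_i - b_j||^2 / (2 * scale)).
    sq = (a * a).sum(-1)[:, None] + (b * b).sum(-1)[None, :] - 2.0 * a @ b.T
    return np.exp(-sq / (2.0 * scale))

def newton_pinv(w, iters=20):
    # Newton-Raphson (Newton-Schulz) iteration converging to the
    # Moore-Penrose pseudo-inverse of w; avoids an explicit SVD.
    v = w.T / (np.linalg.norm(w, 1) * np.linalg.norm(w, np.inf))
    for _ in range(iters):
        v = 2.0 * v - v @ w @ v
    return v

def soft_attention(x, values, m=32, seed=0):
    # x: (n, d) token features used as both queries and keys;
    # values: (n, d_v). Cost is O(n * m * d), i.e. linear in n for fixed m.
    n, d = x.shape
    idx = np.random.default_rng(seed).choice(n, size=m, replace=False)
    landmarks = x[idx]
    c = gaussian_kernel(x, landmarks, np.sqrt(d))          # (n, m)
    w = gaussian_kernel(landmarks, landmarks, np.sqrt(d))  # (m, m)
    # Nystrom-style low-rank decomposition S ~= C W^+ C^T, applied to the
    # values without ever materializing the n x n attention matrix.
    return c @ (newton_pinv(w) @ (c.T @ values))

out = soft_attention(np.random.randn(1024, 64), np.random.randn(1024, 64))
print(out.shape)  # (1024, 64)
```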
Related papers
- Multi-Layer Transformers Gradient Can be Approximated in Almost Linear Time [17.086679273053853]
We show that a novel fast approximation method can calculate the gradients in almost linear time.
By improving the efficiency of gradient, we hope that this work will facilitate more effective training and deployment of long-context language models.
arXiv Detail & Related papers (2024-08-23T17:16:43Z) - Compute-Efficient Medical Image Classification with Softmax-Free Transformers and Sequence Normalization [1.6275928583134276]
The Transformer model has been pivotal in advancing fields such as natural language processing, speech recognition, and computer vision.
A critical limitation of this model is its quadratic computational and memory complexity relative to the sequence length.
This is especially problematic in medical imaging, where high-resolution images can reach gigapixel scale.
arXiv Detail & Related papers (2024-06-03T13:27:08Z) - Convex Bounds on the Softmax Function with Applications to Robustness
Verification [69.09991317119679]
The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
This paper provides convex lower bounds and concave upper bounds on the softmax function, which are compatible with convex optimization formulations for characterizing neural networks and other ML models.
arXiv Detail & Related papers (2023-03-03T05:07:02Z) - Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing methods are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
arXiv Detail & Related papers (2022-07-05T03:08:27Z) - Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm
Regularization [44.54772242784423]
We develop an efficient nonconvex regularization algorithm for low-rank tensor learning.
The proposed algorithm can avoid expensive tensor folding/unfolding operations.
Experiments show that the proposed algorithm is more efficient and uses less space than the existing state-of-the-art.
arXiv Detail & Related papers (2022-05-06T07:47:10Z) - cosFormer: Rethinking Softmax in Attention [60.557869510885205]
Kernel methods are often adopted to reduce this complexity by approximating the softmax operator; a generic sketch of this kernel trick appears after this list.
Due to the approximation errors, their performance varies across tasks and corpora and can drop substantially.
We propose a linear transformer called cosFormer that achieves accuracy comparable to or better than the vanilla transformer.
arXiv Detail & Related papers (2022-02-17T17:53:48Z) - Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method uses a Matrix Taylor Polynomial (MTP) and the other uses Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
arXiv Detail & Related papers (2022-01-29T10:00:35Z) - Stable, Fast and Accurate: Kernelized Attention with Relative Positional
Encoding [63.539333383965726]
We propose a novel way to accelerate attention calculation for Transformers with relative positional encoding (RPE).
Based upon the observation that relative positional encoding forms a Toeplitz matrix, we mathematically show that kernelized attention with RPE can be calculated efficiently using the Fast Fourier Transform (FFT); a minimal sketch of the Toeplitz-via-FFT trick also appears after this list.
arXiv Detail & Related papers (2021-06-23T17:51:26Z)
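As a companion to the cosFormer entry above, here is a minimal sketch of the generic kernel trick it alludes to: replacing softmax(QK^T) with phi(Q)phi(K)^T for a non-negative feature map phi lets the product be re-associated as phi(Q)(phi(K)^T V), which is linear in the sequence length. The ReLU feature map and simple row normalization below are illustrative choices, not cosFormer's exact re-weighting scheme.

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    # Kernelized attention: softmax(Q K^T) V is replaced by
    # phi(Q) (phi(K)^T V) with a non-negative feature map phi.
    # Re-associating the product makes the cost O(n * d * d_v)
    # instead of O(n^2 * d).
    phi_q, phi_k = np.maximum(q, 0.0), np.maximum(k, 0.0)  # ReLU feature map
    kv = phi_k.T @ v                        # (d, d_v), never forms n x n
    z = phi_q @ phi_k.sum(axis=0) + eps     # per-query normalizer
    return (phi_q @ kv) / z[:, None]

q = k = np.random.randn(2048, 64)
v = np.random.randn(2048, 64)
print(linear_attention(q, k, v).shape)  # (2048, 64)
```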
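The last entry rests on a standard fact that is easy to verify numerically: a Toeplitz matrix-vector product can be computed in O(n log n) by embedding the Toeplitz matrix in a circulant one and multiplying in the Fourier domain. The sketch below checks this identity against a dense product; it illustrates the FFT trick only, not the paper's full kernelized-attention algorithm.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    # Multiply a Toeplitz matrix (first column `col`, first row `row`,
    # with col[0] == row[0]) by x via circulant embedding + FFT: O(n log n).
    n = len(x)
    c = np.concatenate([col, [0.0], row[1:][::-1]])  # length-2n circulant generator
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against the dense O(n^2) product.
n = 512
col, row = np.random.randn(n), np.random.randn(n)
row[0] = col[0]
x = np.random.randn(n)
dense = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
                  for i in range(n)])
assert np.allclose(toeplitz_matvec(col, row, x), dense @ x)
print("Toeplitz matvec via FFT matches the dense product")
```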