Transformer with Fourier Integral Attentions
- URL: http://arxiv.org/abs/2206.00206v1
- Date: Wed, 1 Jun 2022 03:06:21 GMT
- Title: Transformer with Fourier Integral Attentions
- Authors: Tan Nguyen and Minh Pham and Tam Nguyen and Khai Nguyen and Stanley J. Osher and Nhat Ho
- Abstract summary: We propose a new class of transformers in which the dot-product kernels are replaced by the novel generalized Fourier integral kernels.
Compared to the conventional transformers with dot-product attention, FourierFormers attain better accuracy and reduce the redundancy between attention heads.
We empirically corroborate the advantages of FourierFormers over the baseline transformers in a variety of practical applications including language modeling and image classification.
- Score: 18.031977028559282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-head attention empowers the recent success of transformers, the
state-of-the-art models that have achieved remarkable success in sequence
modeling and beyond. These attention mechanisms compute the pairwise dot
products between the queries and keys, a computation that results from the use of
unnormalized Gaussian kernels under the assumption that the queries follow a
mixture of Gaussian distributions. There is no guarantee that this assumption is
valid in practice. In response, we first interpret attention in transformers as
a nonparametric kernel regression. We then propose the FourierFormer, a new
class of transformers in which the dot-product kernels are replaced by the
novel generalized Fourier integral kernels. Different from the dot-product
kernels, where we need to choose a good covariance matrix to capture the
dependency of the features of data, the generalized Fourier integral kernels
can automatically capture such dependency and remove the need to tune the
covariance matrix. We theoretically prove that our proposed Fourier integral
kernels can efficiently approximate any key and query distributions. Compared
to the conventional transformers with dot-product attention, FourierFormers
attain better accuracy and reduce the redundancy between attention heads. We
empirically corroborate the advantages of FourierFormers over the baseline
transformers in a variety of practical applications including language modeling
and image classification.
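To make the kernel-regression reading concrete: with queries $\mathbf{q}_i$, keys $\mathbf{k}_j$, and values $\mathbf{v}_j$, attention can be read as the Nadaraya-Watson estimator
$$\widehat{f}(\mathbf{q}_i)=\frac{\sum_{j}\mathbf{v}_j\,k(\mathbf{q}_i,\mathbf{k}_j)}{\sum_{j}k(\mathbf{q}_i,\mathbf{k}_j)},$$
which recovers softmax attention when $k(\mathbf{q},\mathbf{k})=\exp(\mathbf{q}^{\top}\mathbf{k}/\sqrt{D})$, i.e. the unnormalized Gaussian kernel once the key norms are absorbed. The Fourier integral theorem suggests replacing this with a product kernel such as
$$k_{R}(\mathbf{q},\mathbf{k})=\prod_{d=1}^{D}\frac{\sin\!\big(R\,(q_{d}-k_{d})\big)}{\pi\,(q_{d}-k_{d})},$$
which involves no covariance matrix to tune. The sketch below implements a single unmasked attention head with this plain sinc-product kernel; it is a minimal illustration under that assumed kernel form, and the paper's generalized kernels and normalization details may differ.

```python
# Minimal NumPy sketch of Fourier-integral attention with the sinc-product
# kernel k_R above. Illustrative only: the paper's generalized Fourier integral
# kernels and its exact normalization are not reproduced here.
import numpy as np

def fourier_integral_attention(Q, K, V, R=1.0, eps=1e-8):
    """Q: (N, D) queries, K: (M, D) keys, V: (M, Dv) values."""
    diff = Q[:, None, :] - K[None, :, :]            # (N, M, D) pairwise q - k
    # sin(R*t)/(pi*t) == (R/pi) * sinc(R*t/pi), with np.sinc(x) = sin(pi x)/(pi x);
    # np.sinc returns 1 at t == 0, handling the removable singularity.
    kernel = np.prod((R / np.pi) * np.sinc(R * diff / np.pi), axis=-1)  # (N, M)
    # Nadaraya-Watson normalization; unlike softmax, these weights may be
    # negative, so keep the denominator away from zero.
    denom = kernel.sum(axis=-1, keepdims=True)
    denom = np.where(np.abs(denom) < eps, eps, denom)
    return (kernel @ V) / denom

# Usage: a drop-in stand-in for one unmasked attention head.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 16))
out = fourier_integral_attention(Q, K, V, R=2.0)    # shape (4, 16)
```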
Related papers
- New random projections for isotropic kernels using stable spectral distributions [0.0]
We decompose spectral kernel distributions as a scale mixture of $\alpha$-stable random vectors.
Results have broad applications for support vector machines, kernel ridge regression, and other kernel-based machine learning techniques.
arXiv Detail & Related papers (2024-11-05T03:28:01Z)
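The generic recipe behind such random projections is Bochner's theorem: draw frequencies from a spectral law and build cosine-sine features. The sketch below draws i.i.d. per-coordinate symmetric $\alpha$-stable frequencies via `scipy.stats.levy_stable`; it illustrates the recipe only, and the paper's scale-mixture construction for isotropic kernels is more involved.

```python
# Hedged sketch: random Fourier features with symmetric alpha-stable
# frequencies. Shows the generic Bochner recipe, not the paper's decomposition.
import numpy as np
from scipy.stats import levy_stable

def stable_random_features(X, num_features=256, alpha=1.5, scale=1.0, seed=0):
    """X: (n, d) -> (n, 2*num_features) features with phi(x) . phi(y) estimating
    the stationary kernel whose spectral density is the chosen stable law."""
    rng = np.random.default_rng(seed)
    # i.i.d. per-coordinate symmetric alpha-stable frequencies (beta = 0). This
    # yields the separable kernel exp(-scale**alpha * sum_d |x_d - y_d|**alpha);
    # truly isotropic kernels need a scale-mixture construction as in the paper.
    W = levy_stable.rvs(alpha, 0.0, scale=scale,
                        size=(num_features, X.shape[1]), random_state=rng)
    proj = X @ W.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(num_features)
```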
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
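One well-known coupling in this family is orthogonal random features (ORF) for the Gaussian kernel: the frequency directions are coupled to be exactly orthogonal while their norms keep the correct chi distribution, which lowers the variance of the kernel estimate. A minimal sketch of that standard construction follows; the couplings proposed in the paper itself are more general.

```python
# Hedged sketch: orthogonal random features (ORF), a classic variance-reducing
# coupling for Gaussian-kernel random features; not necessarily the couplings
# proposed in this paper.
import numpy as np

def orthogonal_gaussian_frequencies(d, seed=0):
    """(d, d) frequency matrix: orthogonal directions with chi(d)-distributed
    norms, so each row keeps the N(0, I_d) marginal of i.i.d. RFF."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # coupled: exactly orthogonal
    norms = np.sqrt(rng.chisquare(df=d, size=d))      # ||w|| ~ chi(d), as for Gaussians
    return Q * norms[:, None]

def rff(X, W):
    """Cosine-sine feature map for frequencies W: (m, d)."""
    proj = X @ W.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(W.shape[0])
```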
- Solving High Frequency and Multi-Scale PDEs with Gaussian Processes [18.190228010565367]
Physics-informed neural networks (PINNs) often struggle to solve high-frequency and multi-scale PDEs.
We resort to the Gaussian process (GP) framework to solve this problem.
We use Kronecker product properties and multilinear algebra to promote computational efficiency and scalability.
arXiv Detail & Related papers (2023-11-08T05:26:58Z)
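The Kronecker trick referenced here is standard for grid-structured inputs: the kernel matrix factorizes as $K = K_1 \otimes K_2$, so matrix-vector products (and, through per-factor eigendecompositions, solves) never materialize $K$. A minimal sketch of the matvec identity at the core of the scalability claim; the paper's full GP solver adds considerably more machinery.

```python
# Hedged sketch: Kronecker-structured matvec, (K1 (x) K2) vec(X) = vec(K1 X K2^T)
# in NumPy's row-major vec convention. Never forms the (m*n, m*n) matrix.
import numpy as np

def kron_matvec(K1, K2, x):
    """Compute np.kron(K1, K2) @ x without building the Kronecker product.
    K1: (m, m), K2: (n, n), x: (m*n,)."""
    m, n = K1.shape[0], K2.shape[0]
    X = x.reshape(m, n)                  # inverse of row-major vec
    return (K1 @ X @ K2.T).reshape(-1)

# Quick check against the dense product on a small example.
rng = np.random.default_rng(0)
K1, K2 = rng.normal(size=(3, 3)), rng.normal(size=(4, 4))
x = rng.normal(size=12)
assert np.allclose(kron_matvec(K1, K2, x), np.kron(K1, K2) @ x)
```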
- Kernel Learning by quantum annealer [0.966840768820136]
We propose an application of the Boltzmann machine to the kernel matrix used in various machine-learning techniques.
We show that it is possible to create a spectral distribution that would not be feasible with a Gaussian distribution.
arXiv Detail & Related papers (2023-04-20T08:08:03Z)
- Deep Fourier Up-Sampling [100.59885545206744]
Unlike spatial up-sampling, up-sampling in the Fourier domain is more challenging because it does not obey the same local interpolation property.
We propose a theoretically sound Deep Fourier Up-Sampling (FourierUp) to solve these issues.
arXiv Detail & Related papers (2022-10-11T06:17:31Z)
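For contrast with the learned modules in FourierUp, the textbook Fourier-domain up-sampler is spectral zero-padding: center the spectrum, pad it with zeros, and invert. A minimal sketch of that classical baseline:

```python
# Hedged sketch: classic 2x up-sampling by zero-padding the centered FFT
# spectrum. This is the textbook baseline, not the paper's FourierUp operator.
import numpy as np

def fourier_upsample2x(img):
    """img: (H, W) real array -> (2H, 2W) band-limited interpolation."""
    H, W = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    padded = np.pad(spec, ((H // 2, H - H // 2), (W // 2, W - W // 2)))
    # Factor 4 keeps mean intensity fixed after doubling both dimensions.
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded))) * 4.0
```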
- Unified Fourier-based Kernel and Nonlinearity Design for Equivariant Networks on Homogeneous Spaces [52.424621227687894]
We introduce a unified framework for group equivariant networks on homogeneous spaces.
We take advantage of the sparsity of Fourier coefficients of the lifted feature fields.
We show that other methods treating features as the Fourier coefficients in the stabilizer subgroup are special cases of our activation.
arXiv Detail & Related papers (2022-06-16T17:59:01Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases [73.53227696624306]
We present a new family of algorithms for learning Fourier-sparse set functions.
In contrast to other work that focused on the Walsh-Hadamard transform, our novel algorithms operate with recently introduced non-orthogonal Fourier transforms.
We demonstrate effectiveness on several real-world applications.
arXiv Detail & Related papers (2020-10-01T14:31:59Z)
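For orientation, the Walsh-Hadamard transform that prior work relied on is the orthogonal Fourier basis for set functions $f:\{0,1\}^n\to\mathbb{R}$, computable with a simple $O(N\log N)$ in-place butterfly over the $N=2^n$ function values. A minimal sketch of that orthogonal baseline (the paper's sparse, non-orthogonal algorithms are not shown):

```python
# Hedged sketch: fast Walsh-Hadamard transform (FWHT) of a set function given
# as a length-2**n table; the orthogonal baseline this entry contrasts with.
import numpy as np

def fwht(f):
    """Return the Walsh-Hadamard (Fourier) coefficients of f, len(f) = 2**n."""
    a = np.asarray(f, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a / len(a)   # one common normalization for Fourier coefficients
```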
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Thanks to their guaranteed expressivity, these flows can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
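The underlying "gaussianization" step is easy to illustrate: push each marginal through an (empirical) CDF and then through the inverse Gaussian CDF, interleaving rotations across iterations. A minimal sketch of one such iteration, in the spirit of classic iterative gaussianization rather than the paper's trainable flow:

```python
# Hedged sketch: one iteration of rank-based gaussianization followed by a
# random rotation. The paper's Gaussianization Flows replace these fixed steps
# with trainable, analytically invertible layers.
import numpy as np
from scipy.stats import norm

def gaussianize_step(X, seed=0):
    """X: (n, d) samples -> (n, d) samples with roughly Gaussian marginals."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    ranks = X.argsort(axis=0).argsort(axis=0) + 1      # per-dimension ranks in 1..n
    Z = norm.ppf(ranks / (n + 1.0))                    # empirical CDF -> inverse Gaussian CDF
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal rotation
    return Z @ Q
```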