Sparse and Continuous Attention Mechanisms
- URL: http://arxiv.org/abs/2006.07214v3
- Date: Thu, 29 Oct 2020 08:39:54 GMT
- Title: Sparse and Continuous Attention Mechanisms
- Authors: André F. T. Martins, António Farinhas, Marcos Treviso, Vlad Niculae, Pedro M. Q. Aguiar, Mário A. T. Figueiredo
- Abstract summary: We introduce continuous-domain attention mechanisms, deriving efficient gradient backpropagation algorithms for alpha in {1,2}.
Experiments on attention-based text classification, machine translation, and visual question answering illustrate the use of continuous attention in 1D and 2D.
- Score: 14.941013982958209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exponential families are widely used in machine learning; they include many
distributions in continuous and discrete domains (e.g., Gaussian, Dirichlet,
Poisson, and categorical distributions via the softmax transformation).
Distributions in each of these families have fixed support. In contrast, for
finite domains, there has been recent work on sparse alternatives to softmax
(e.g. sparsemax and alpha-entmax), which have varying support, being able to
assign zero probability to irrelevant categories. This paper expands that work
in two directions: first, we extend alpha-entmax to continuous domains,
revealing a link with Tsallis statistics and deformed exponential families.
Second, we introduce continuous-domain attention mechanisms, deriving efficient
gradient backpropagation algorithms for alpha in {1,2}. Experiments on
attention-based text classification, machine translation, and visual question
answering illustrate the use of continuous attention in 1D and 2D, showing that
it allows attending to time intervals and compact regions.
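For context on the sparse finite-domain case the abstract builds on, sparsemax (alpha-entmax with alpha=2) is the Euclidean projection of the score vector onto the probability simplex, computable in closed form by sorting and thresholding. Below is a minimal NumPy sketch; the function name and example scores are illustrative, not taken from the paper.

```python
# Minimal sparsemax sketch (Martins & Astudillo, 2016), the alpha=2 case of
# alpha-entmax: unlike softmax, it can assign exactly zero probability to
# low-scoring entries.
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Euclidean projection of the score vector z onto the probability simplex."""
    z_sorted = np.sort(z)[::-1]            # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    # support: all k with 1 + k * z_(k) > sum of the top-k sorted scores
    support = k * z_sorted > cumsum - 1
    k_z = k[support][-1]                   # size of the support
    tau = (cumsum[support][-1] - 1) / k_z  # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

scores = np.array([1.6, 0.8, -0.2, -1.0])
print(sparsemax(scores))  # [0.9 0.1 0.  0. ] -- exact zeros on weak scores
```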
Related papers
- MultiMax: Sparse and Multi-Modal Attention Learning [60.49318008131978]
SoftMax is a ubiquitous ingredient of modern machine learning algorithms.
We show that sparsity can be achieved by a family of SoftMax variants, but they often require an alternative loss function and do not preserve multi-modality.
We propose MultiMax, which adaptively modulates the output distribution according to the range of the input entries.
arXiv Detail & Related papers (2024-06-03T10:51:43Z) - Hierarchical Multiresolution Feature- and Prior-based Graphs for Classification [3.1219977244201056]
We formulated the classification problem on three variants of multiresolution neighborhood graphs and the graph of a hierarchical conditional random field.
Each of these graphs was weighted and undirected and could thus incorporate the spatial or hierarchical relationships in all directions.
It expanded on a random walker graph by using novel mechanisms to derive the edge weights of its spatial feature-based subgraph.
arXiv Detail & Related papers (2023-06-03T15:58:38Z) - Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features [65.64276393443346]
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints.
We present two new variants of the FW algorithm for minimizing finite-sum objectives.
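For orientation, a minimal sketch of the classical Frank-Wolfe iteration follows; this is the textbook method, not the paper's new variants, and the quadratic objective and simplex constraint are illustrative assumptions.

```python
# Classical Frank-Wolfe sketch: minimize f(x) = 0.5 * ||x - b||^2 over the
# probability simplex. The structured constraint makes the linear minimization
# oracle (LMO) trivial: it returns the vertex with the smallest gradient entry.
import numpy as np

def frank_wolfe(b: np.ndarray, num_iters: int = 200) -> np.ndarray:
    d = len(b)
    x = np.full(d, 1.0 / d)                # feasible start: uniform distribution
    for t in range(num_iters):
        grad = x - b                       # gradient of 0.5 * ||x - b||^2
        s = np.zeros(d)
        s[np.argmin(grad)] = 1.0           # LMO: best vertex of the simplex
        gamma = 2.0 / (t + 2.0)            # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

print(frank_wolfe(np.array([0.2, -0.1, 0.7])))  # approx. projection of b
```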
arXiv Detail & Related papers (2023-04-23T20:05:09Z) - Efficient Graph Field Integrators Meet Point Clouds [59.27295475120132]
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
The first class, SeparatorFactorization(SF), leverages the bounded genus of point cloud mesh graphs, while the second class, RFDiffusion(RFD), uses popular epsilon-nearest-neighbor graph representations for point clouds.
arXiv Detail & Related papers (2023-02-02T08:33:36Z) - On the Normalizing Constant of the Continuous Categorical Distribution [24.015934908123928]
A novel family of distributions on the simplex has been discovered: the continuous categorical.
Despite the mathematical simplicity of its density, our understanding of its normalizing constant remains far from complete.
We present theoretical and methodological advances that can, in turn, help to enable broader applications of the continuous categorical distribution.
arXiv Detail & Related papers (2022-04-28T05:06:12Z) - Kernel Deformed Exponential Families for Sparse Continuous Attention [76.61129971916702]
We show existence results for kernel exponential and deformed exponential families.
Experiments show that kernel deformed exponential families can attend to multiple compact regions of the data domain.
arXiv Detail & Related papers (2021-11-01T19:21:22Z) - Evidential Softmax for Sparse Multimodal Distributions in Deep Generative Models [38.26333732364642]
We present $\textit{ev-softmax}$, a sparse normalization function that preserves the multimodality of probability distributions.
We evaluate our method on a variety of generative models, including variational autoencoders and auto-regressive architectures.
arXiv Detail & Related papers (2021-10-27T05:32:25Z) - Sparse Continuous Distributions and Fenchel-Young Losses [28.52737451408056]
We extend $\Omega$-regularized prediction maps and Fenchel-Young losses to arbitrary domains.
For quadratic energy functions in continuous domains, the resulting densities are $\beta$-Gaussians.
We demonstrate our sparse continuous distributions for attention-based audio classification and visual question answering.
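As a concrete instance of the quadratic-energy case, the alpha=2 continuous attention density derived in the main paper above is a "truncated parabola" with compact support, a member of the beta-Gaussian family. A small NumPy sketch (parameter names are illustrative):

```python
# Truncated-parabola density: with quadratic energy -(t - mu)^2 / (2 * sigma2),
# continuous sparsemax (alpha = 2) gives a density supported on [mu - a, mu + a]
# with a = (3 * sigma2 / 2) ** (1/3), and exactly zero elsewhere.
import numpy as np

def truncated_parabola(t: np.ndarray, mu: float, sigma2: float) -> np.ndarray:
    a = (1.5 * sigma2) ** (1.0 / 3.0)  # half-width chosen so the density integrates to 1
    return np.maximum((a**2 - (t - mu) ** 2) / (2.0 * sigma2), 0.0)

t = np.linspace(-3.0, 3.0, 7)
print(truncated_parabola(t, mu=0.0, sigma2=1.0))  # zeros outside the support
```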
arXiv Detail & Related papers (2021-08-04T12:07:18Z) - Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable Approach for Continuous Markov Random Fields [53.31927549039624]
We show that a piecewise discretization preserves contrast better than existing discretization approaches.
We apply this theory to the problem of matching two images.
arXiv Detail & Related papers (2021-07-13T12:31:06Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.