A Theoretical Understanding of Shallow Vision Transformers: Learning,
Generalization, and Sample Complexity
- URL: http://arxiv.org/abs/2302.06015v3
- Date: Sun, 12 Nov 2023 04:36:45 GMT
- Title: A Theoretical Understanding of Shallow Vision Transformers: Learning,
Generalization, and Sample Complexity
- Authors: Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen
- Abstract summary: ViTs with self-attention modules have recently achieved great empirical success in many tasks.
However, theoretical learning and generalization analysis remains mostly elusive.
This paper provides the first theoretical analysis of a shallow ViT for a classification task.
- Score: 71.11795737362459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision Transformers (ViTs) with self-attention modules have recently achieved
great empirical success in many vision tasks. Due to non-convex interactions
across layers, however, theoretical learning and generalization analysis is
mostly elusive. Based on a data model characterizing both label-relevant and
label-irrelevant tokens, this paper provides the first theoretical analysis of
training a shallow ViT, i.e., one self-attention layer followed by a two-layer
perceptron, for a classification task. We characterize the sample complexity to
achieve a zero generalization error. Our sample complexity bound is positively
correlated with the inverse of the fraction of label-relevant tokens, the token
noise level, and the initial model error. We also prove that a training process
using stochastic gradient descent (SGD) leads to a sparse attention map, which
is a formal verification of the general intuition about the success of
attention. Moreover, this paper indicates that a proper token sparsification
can improve the test performance by removing label-irrelevant and/or noisy
tokens, including spurious correlations. Empirical experiments on synthetic
data and the CIFAR-10 dataset justify our theoretical results, which also
generalize to deeper ViTs.
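As a concrete illustration (ours, not the authors' code), the sketch below instantiates the shallow ViT described in the abstract in PyTorch: a single self-attention layer followed by a two-layer perceptron, with an optional token-sparsification step that keeps only the most-attended tokens as a stand-in for removing label-irrelevant or noisy ones. The hinge loss, mean pooling, and `keep_ratio` knob are illustrative assumptions; the paper's exact setup may differ.

```python
# Minimal sketch (not the authors' code) of a shallow ViT: one self-attention
# layer followed by a two-layer perceptron producing a single logit.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowViT(nn.Module):
    def __init__(self, token_dim: int, hidden_dim: int, keep_ratio: float = 1.0):
        super().__init__()
        # Single self-attention layer (query/key/value maps).
        self.wq = nn.Linear(token_dim, token_dim, bias=False)
        self.wk = nn.Linear(token_dim, token_dim, bias=False)
        self.wv = nn.Linear(token_dim, token_dim, bias=False)
        # Two-layer perceptron head.
        self.mlp = nn.Sequential(
            nn.Linear(token_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        self.keep_ratio = keep_ratio  # fraction of tokens kept (1.0 = no sparsification)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, token_dim)
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        attn = F.softmax(q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
        tokens = attn @ v  # (batch, num_tokens, token_dim)
        if self.keep_ratio < 1.0:
            # Token sparsification (assumed heuristic): keep the tokens that
            # receive the most total attention, as a proxy for dropping
            # label-irrelevant/noisy tokens.
            n_keep = max(1, int(self.keep_ratio * tokens.shape[1]))
            score = attn.sum(dim=1)  # total attention received per token
            idx = score.topk(n_keep, dim=-1).indices
            tokens = tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
        pooled = tokens.mean(dim=1)          # average-pool remaining tokens
        return self.mlp(pooled).squeeze(-1)  # one logit per example

# Usage sketch: binary labels in {+1, -1}, hinge loss, plain SGD as in the
# paper's training setup (loss choice and hyperparameters are our assumptions).
model = ShallowViT(token_dim=32, hidden_dim=64, keep_ratio=0.5)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 16, 32)                          # 8 samples, 16 tokens each
y = torch.randint(0, 2, (8,)).float() * 2 - 1       # labels in {+1, -1}
loss = F.relu(1 - y * model(x)).mean()
loss.backward()
opt.step()
```

With `keep_ratio < 1.0`, the forward pass mimics the token-sparsification effect the paper analyzes: tokens that attract little attention are discarded before pooling, which is one simple way to remove label-irrelevant or noisy tokens.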
Related papers
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- COSST: Multi-organ Segmentation with Partially Labeled Datasets Using Comprehensive Supervisions and Self-training [15.639976408273784]
Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated.
It is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential.
We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training.
arXiv Detail & Related papers (2023-04-27T08:55:34Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Contrastive variational information bottleneck for aspect-based sentiment analysis [36.83876224466177]
We propose to reduce spurious correlations for aspect-based sentiment analysis (ABSA) via a novel Contrastive Variational Information Bottleneck framework (called CVIB).
The proposed CVIB framework is composed of an original network and a self-pruned network, and these two networks are optimized simultaneously via contrastive learning.
Our approach achieves better performance than the strong competitors in terms of overall prediction performance, robustness, and generalization.
arXiv Detail & Related papers (2023-03-06T02:52:37Z)
- The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning [32.15608637930748]
We show that there exists a trade-off between the two desiderata so that one may not be able to achieve both simultaneously.
We provide analysis using a theoretical data model and show that, while more diverse pre-training data yield more diverse features for different tasks, they put less emphasis on task-specific features.
arXiv Detail & Related papers (2023-02-28T22:14:33Z)
- Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks [89.28881869440433]
This paper provides the first theoretical characterization of joint edge-model sparse learning for graph neural networks (GNNs).
It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising the test accuracy.
arXiv Detail & Related papers (2023-02-06T16:54:20Z)
- Generalizable Information Theoretic Causal Representation [37.54158138447033]
We propose to learn causal representation from observational data by regularizing the learning procedure with mutual information measures according to our hypothetical causal graph.
The optimization involves a counterfactual loss, based on which we deduce a theoretical guarantee that the causality-inspired learning is with reduced sample complexity and better generalization ability.
arXiv Detail & Related papers (2022-02-17T00:38:35Z)
- A Theoretical Analysis of Learning with Noisily Labeled Data [62.946840431501855]
We first show that during the first training epoch, the examples with clean labels are learned first.
We then show that, after this clean-data learning stage, continuing to train the model can further reduce the testing error.
arXiv Detail & Related papers (2021-04-08T23:40:02Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrastive examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)