Objectives Matter: Understanding the Impact of Self-Supervised
Objectives on Vision Transformer Representations
- URL: http://arxiv.org/abs/2304.13089v1
- Date: Tue, 25 Apr 2023 18:48:23 GMT
- Title: Objectives Matter: Understanding the Impact of Self-Supervised
Objectives on Vision Transformer Representations
- Authors: Shashank Shekhar, Florian Bordes, Pascal Vincent, Ari Morcos
- Abstract summary: We show that reconstruction-based learning features are significantly dissimilar to joint-embedding based learning features.
We find that joint-embedding features yield better linear probe transfer for classification because the different objectives drive different distributions of information.
- Score: 13.437097059358067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Joint-embedding based learning (e.g., SimCLR, MoCo, DINO) and
reconstruction-based learning (e.g., BEiT, SimMIM, MAE) are the two leading
paradigms for self-supervised learning of vision transformers, but they differ
substantially in their transfer performance. Here, we aim to explain these
differences by analyzing the impact of these objectives on the structure and
transferability of the learned representations. Our analysis reveals that
reconstruction-based learning features are significantly dissimilar to
joint-embedding based learning features and that models trained with similar
objectives learn similar features even across architectures. These differences
arise early in the network and are primarily driven by attention and
normalization layers. We find that joint-embedding features yield better linear
probe transfer for classification because the different objectives drive
different distributions of information and invariances in the learned
representation. These differences explain opposite trends in transfer
performance for downstream tasks that require spatial specificity in features.
Finally, we address how fine-tuning changes reconstructive representations to
enable better transfer, showing that fine-tuning re-organizes the information
to be more similar to pre-trained joint embedding models.
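A common way to make the abstract's representation comparisons concrete is centered kernel alignment (CKA) between features extracted from two pretrained backbones. The snippet below is a minimal linear-CKA sketch in PyTorch, with random tensors standing in for features from, say, a reconstruction-trained (MAE-style) and a joint-embedding (DINO-style) ViT; the paper's actual analysis pipeline and metrics may differ, and all names here are illustrative.
```python
import torch

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two feature matrices of shape
    (num_samples, dim). Values near 1 indicate highly similar representations."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    xty = (x.T @ y).norm() ** 2   # ||X^T Y||_F^2
    xtx = (x.T @ x).norm()        # ||X^T X||_F
    yty = (y.T @ y).norm()        # ||Y^T Y||_F
    return xty / (xtx * yty)

# Toy usage: random stand-ins for [CLS]-token features from two pretrained ViTs.
feats_reconstruction = torch.randn(512, 768)   # e.g., MAE-style model
feats_joint_embedding = torch.randn(512, 768)  # e.g., DINO-style model
print(linear_cka(feats_reconstruction, feats_joint_embedding))
```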
Related papers
- Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient [0.49478969093606673]
We introduce refined variants of the Local Learning Coefficient (LLC), a measure of model complexity grounded in singular learning theory.
We study the development of internal structure in transformer language models during training.
arXiv Detail & Related papers (2024-10-03T20:51:02Z)
- Unveiling Backbone Effects in CLIP: Exploring Representational Synergies and Variances [49.631908848868505]
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning.
We investigate the differences in CLIP performance among various neural architectures.
We propose a simple, yet effective approach to combine predictions from multiple backbones, leading to a notable performance boost of up to 6.34%.
arXiv Detail & Related papers (2023-12-22T03:01:41Z)
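The CLIP entry above reports gains from combining predictions across backbones but does not spell out the combination rule in its summary; the sketch below shows one plausible reading, simple averaging of per-backbone softmax probabilities. Shapes and names are illustrative.
```python
import torch

def ensemble_predictions(logits_per_model):
    """Average softmax probabilities from several backbones; each entry is a
    (batch, num_classes) logit tensor produced by one model's zero-shot head."""
    probs = [torch.softmax(logits, dim=-1) for logits in logits_per_model]
    return torch.stack(probs, dim=0).mean(dim=0)

# Toy usage with random logits standing in for two different CLIP backbones.
fused = ensemble_predictions([torch.randn(4, 1000), torch.randn(4, 1000)])
predictions = fused.argmax(dim=-1)
```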
- ExpPoint-MAE: Better interpretability and performance for self-supervised point cloud transformers [7.725095281624494]
We evaluate the effectiveness of Masked Autoencoding as a pretraining scheme, and explore Momentum Contrast as an alternative.
We observe that the transformer learns to attend to semantically meaningful regions, indicating that pretraining leads to a better understanding of the underlying geometry.
arXiv Detail & Related papers (2023-06-19T09:38:21Z)
- Analyzing Multimodal Objectives Through the Lens of Generative Diffusion Guidance [34.27851973031995]
We leverage the fact that classifier-guided diffusion models generate images that reflect the semantic signals provided by the classifier.
Specifically, we compare contrastive, matching, and captioning losses in terms of their semantic signals, and introduce a simple baseline that not only supports our analyses but also improves the quality of generative guidance.
arXiv Detail & Related papers (2023-02-10T11:17:20Z)
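For context on the mechanism the diffusion-guidance entry above leverages: classifier guidance shifts the model's noise prediction by the gradient of a classifier's log-probability so that sampling drifts toward the requested class; the paper replaces that classifier signal with multimodal (contrastive, matching, captioning) objectives. The sketch below shows only the generic guidance step, with hypothetical `classifier`, `eps_pred`, and `alpha_bar_t` inputs.
```python
import torch

def classifier_guided_noise(eps_pred, x_t, t, classifier, target_y, alpha_bar_t, scale=1.0):
    """Adjust a diffusion model's noise prediction with classifier gradients
    (classifier guidance in the style of Dhariwal & Nichol, 2021)."""
    x_t = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_t, t), dim=-1)
    selected = log_probs[torch.arange(x_t.size(0)), target_y].sum()
    grad = torch.autograd.grad(selected, x_t)[0]
    # Shift the predicted noise against the classifier gradient so sampling
    # moves toward images the classifier assigns to target_y.
    return eps_pred - scale * ((1.0 - alpha_bar_t) ** 0.5) * grad

# Toy usage with a dummy classifier over flattened 8x8 RGB "images".
dummy = torch.nn.Linear(3 * 8 * 8, 10)
classifier = lambda x, t: dummy(x.flatten(1))
x_t = torch.randn(4, 3, 8, 8)
eps = torch.randn_like(x_t)
guided = classifier_guided_noise(eps, x_t, t=None, classifier=classifier,
                                 target_y=torch.zeros(4, dtype=torch.long),
                                 alpha_bar_t=0.5, scale=2.0)
```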
- Demystify Transformers & Convolutions in Modern Image Deep Networks [82.32018252867277]
This paper aims to identify the real gains of popular convolution and attention operators through a detailed study.
We find that the key difference among these feature transformation modules, such as attention or convolution, lies in their spatial feature aggregation approach.
Our experiments on various tasks and an analysis of inductive bias show a significant performance boost due to advanced network-level and block-level designs.
arXiv Detail & Related papers (2022-11-10T18:59:43Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- Weak Augmentation Guided Relational Self-Supervised Learning [80.0680103295137]
We introduce a novel relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances.
Our proposed method employs a sharpened distribution of pairwise similarities among different instances as a relation metric.
Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures.
arXiv Detail & Related papers (2022-03-16T16:14:19Z)
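A rough sketch of the relation metric described in the ReSSL entry above: the similarity distribution of a strongly augmented view over a memory queue is trained to match a sharpened distribution computed from a weakly augmented view. Temperatures, dimensions, and names are illustrative rather than the paper's exact settings.
```python
import torch
import torch.nn.functional as F

def ressl_relation_loss(z_strong, z_weak, queue, tau_student=0.1, tau_teacher=0.04):
    """Align the student's similarity distribution over a memory queue with a
    sharpened teacher distribution from a weakly augmented view (in practice
    the weak view usually passes through a momentum encoder)."""
    z_strong = F.normalize(z_strong, dim=1)
    z_weak = F.normalize(z_weak, dim=1)
    queue = F.normalize(queue, dim=1)

    # Pairwise similarities of each view to the queue of other instances.
    logits_student = z_strong @ queue.T / tau_student
    logits_teacher = z_weak @ queue.T / tau_teacher   # lower temperature -> sharper

    target = F.softmax(logits_teacher, dim=1).detach()
    # Cross-entropy between teacher and student relation distributions.
    return -(target * F.log_softmax(logits_student, dim=1)).sum(dim=1).mean()

# Toy usage with random embeddings (all sizes are arbitrary).
loss = ressl_relation_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(4096, 128))
```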
- Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning [20.98896935012773]
We compare the representations learned by eight different convolutional neural networks.
We find that the network trained with reinforcement learning differs most from the other networks.
arXiv Detail & Related papers (2021-12-03T17:18:09Z)
- Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks [79.13089902898848]
Self-supervised learning is a powerful paradigm for representation learning on unlabelled images.
We show that different tasks in computer vision require features to encode different (in)variances.
arXiv Detail & Related papers (2021-11-22T18:16:35Z)
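The invariance analysis mentioned in the entry above (and echoed in the main abstract) can be approximated crudely by comparing features of a batch with features of an augmented copy of the same images; mean cosine similarity serves as a simple invariance proxy here, though the paper's actual measure may be more elaborate.
```python
import torch
import torch.nn.functional as F

def augmentation_invariance(feats_a, feats_b):
    """Mean cosine similarity between features of two augmented views of the
    same images; higher values indicate stronger invariance to that augmentation."""
    return F.cosine_similarity(feats_a, feats_b, dim=1).mean()

# Toy usage: random stand-ins for features of original vs. augmented batches.
invariance = augmentation_invariance(torch.randn(32, 768), torch.randn(32, 768))
```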
- Do Vision Transformers See Like Convolutional Neural Networks? [45.69780772718875]
Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or even superior performance on image classification tasks.
Are they acting like convolutional networks, or learning entirely different visual representations?
We find striking differences between the two architectures, such as ViT having more uniform representations across all layers.
arXiv Detail & Related papers (2021-08-19T17:27:03Z)
- What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
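The "same basin" observation in the last entry is typically checked by evaluating the loss along the straight line between two solutions: if no barrier rises above the endpoints, they plausibly share a basin. Below is a minimal PyTorch sketch of that interpolation check; `model`, `loss_fn`, and `loader` are assumed to be supplied by the caller.
```python
import torch

def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Linear interpolation between two sets of model weights; non-float
    entries (e.g., integer buffers) are copied from the first model."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k]
            if sd_a[k].is_floating_point() else sd_a[k] for k in sd_a}

def loss_barrier(model, sd_a, sd_b, loss_fn, loader, steps=5):
    """Evaluate the loss along the line between two solutions; a flat profile
    with no bump above the endpoints suggests they share a loss basin."""
    losses = []
    for alpha in torch.linspace(0, 1, steps):
        model.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha.item()))
        model.eval()
        with torch.no_grad():
            batch_losses = [loss_fn(model(x), y) for x, y in loader]
        losses.append(torch.stack(batch_losses).mean().item())
    return losses
```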
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of its content) and is not responsible for any consequences of its use.