Can Self-Supervised Representation Learning Methods Withstand
Distribution Shifts and Corruptions?
- URL: http://arxiv.org/abs/2308.02525v2
- Date: Fri, 11 Aug 2023 12:31:02 GMT
- Title: Can Self-Supervised Representation Learning Methods Withstand
Distribution Shifts and Corruptions?
- Authors: Prakash Chandra Chhipa, Johan Rodahl Holmgren, Kanjar De, Rajkumar
Saini and Marcus Liwicki
- Abstract summary: Self-supervised learning in computer vision aims to leverage the inherent structure and relationships within data to learn meaningful representations.
This work investigates the robustness of the learned representations of self-supervised learning approaches, focusing on distribution shifts and image corruptions.
- Score: 5.706184197639971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning in computer vision aims to leverage the inherent
structure and relationships within data to learn meaningful representations
without explicit human annotation, enabling a holistic understanding of visual
scenes. Robustness in vision machine learning ensures reliable and consistent
performance, enhancing generalization, adaptability, and resistance to noise,
variations, and adversarial attacks. Self-supervised paradigms, namely
contrastive learning, knowledge distillation, mutual information maximization,
and clustering, are considered to have advanced the learning of invariant
representations. This work investigates, through detailed experiments, the
robustness of representations learned by self-supervised approaches under
distribution shifts and image corruptions in computer vision. The empirical
analysis demonstrates
a clear relationship between the performance of learned representations within
self-supervised paradigms and the severity of distribution shifts and
corruptions. Notably, higher levels of shifts and corruptions are found to
significantly diminish the robustness of the learned representations. These
findings highlight the critical impact of distribution shifts and image
corruptions on the performance and resilience of self-supervised learning
methods, emphasizing the need for effective strategies to mitigate their
adverse effects. The study strongly advocates for future research in the field
of self-supervised representation learning to prioritize the key aspects of
safety and robustness in order to ensure practical applicability. The source
code and results are available on GitHub.
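The severity-dependent evaluation described above can be illustrated with a short, hypothetical sketch (not the authors' exact protocol from their repository): a frozen encoder pretrained with any self-supervised method is paired with a linear probe, and its accuracy is measured while a synthetic corruption is applied at increasing severity. The Gaussian-noise corruption, the severity scale, and the CIFAR-10 test set are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's exact protocol): probe a frozen
# SSL-pretrained encoder with a linear head and track accuracy as a
# synthetic corruption grows in severity.
import torch
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10

def gaussian_noise(severity: int):
    """Additive Gaussian noise whose strength grows with severity (1-5);
    the std values are illustrative, not the paper's."""
    std = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    return T.Lambda(lambda x: (x + std * torch.randn_like(x)).clamp(0.0, 1.0))

@torch.no_grad()
def probe_accuracy(encoder, linear_head, severity: int) -> float:
    tf = T.Compose([T.ToTensor(), gaussian_noise(severity)])
    loader = DataLoader(
        CIFAR10("data", train=False, transform=tf, download=True),
        batch_size=256,
    )
    correct = total = 0
    for images, labels in loader:
        logits = linear_head(encoder(images))  # frozen features -> class scores
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total

# encoder / linear_head would come from an SSL checkpoint (e.g. SimCLR, DINO):
# for sev in range(1, 6):
#     print(sev, probe_accuracy(encoder, linear_head, sev))
```

Plotting accuracy against severity reproduces the kind of performance-versus-severity relationship the abstract describes.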
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown whether the nature of the representations induced by the two learning paradigms is similar.
We identify the uniform distribution of data representations over the unit hypersphere in the CSL representation space as the key contributor to this heightened susceptibility (a minimal sketch of the uniformity metric appears after this list).
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
- Is Self-Supervised Learning More Robust Than Supervised Learning? [29.129681691651637]
Self-supervised contrastive learning is a powerful tool to learn visual representation without labels.
We conduct a series of robustness tests to quantify the behavioral differences between contrastive learning and supervised learning.
Under pre-training corruptions, we find contrastive learning vulnerable to patch shuffling and pixel intensity change, yet less sensitive to dataset-level distribution change (a sketch of patch shuffling appears after this list).
arXiv Detail & Related papers (2022-06-10T17:58:00Z)
- Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Understand and Improve Contrastive Learning Methods for Visual Representation: A Review [1.4650545418986058]
A promising alternative, self-supervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling.
This literature review aims to provide an up-to-date analysis of the efforts of researchers to understand the key components and the limitations of self-supervised learning.
arXiv Detail & Related papers (2021-06-06T21:59:49Z)
- Evaluating the Robustness of Self-Supervised Learning in Medical Imaging [57.20012795524752]
Self-supervision has been shown to be an effective learning strategy when training target tasks on small annotated datasets.
We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
arXiv Detail & Related papers (2021-05-14T17:49:52Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire the future research of self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- Self-Supervised Learning Across Domains [33.86614301708017]
We propose to apply a similar approach to the problem of object recognition across domains.
Our model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals on the same images.
This secondary task helps the network to focus on object shapes, learning concepts like spatial orientation and part correlation, while acting as a regularizer for the classification task.
arXiv Detail & Related papers (2020-07-24T06:19:53Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
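Regarding the entry above on adversarial susceptibility of contrastive SSL: the uniformity of representations over the unit hypersphere is commonly quantified with the Gaussian-potential metric of Wang and Isola (2020). The sketch below is a minimal illustration of that metric, with the batch of embeddings as an assumed input; it is not code from either paper.

```python
# Minimal sketch of the hypersphere uniformity metric (Wang & Isola, 2020):
# the log of the mean Gaussian potential over all pairs of L2-normalized
# embeddings. Lower values indicate embeddings spread more uniformly.
import torch
import torch.nn.functional as F

def uniformity(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    z = F.normalize(embeddings, dim=1)     # project onto the unit hypersphere
    sq_dists = torch.pdist(z, p=2).pow(2)  # pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()

# Example: a tightly clustered batch scores higher (less uniform) than a
# well-spread one.
# clustered = torch.randn(512, 128) * 0.01 + 1.0  # nearly identical directions
# spread    = torch.randn(512, 128)               # roughly uniform after normalization
# print(uniformity(clustered), uniformity(spread))
```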
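Similarly, the patch-shuffling corruption mentioned in the entry "Is Self-Supervised Learning More Robust Than Supervised Learning?" can be sketched as below; the grid size and the (C, H, W) tensor layout are assumptions for illustration, not that paper's implementation.

```python
# Hypothetical sketch of a patch-shuffling corruption: split an image tensor
# into a grid of patches and permute the patches at random.
import torch

def patch_shuffle(x: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Shuffle a (C, H, W) image as a grid x grid arrangement of patches."""
    c, h, w = x.shape
    ph, pw = h // grid, w // grid
    # (C, grid, ph, grid, pw) -> (grid*grid, C, ph, pw)
    patches = (x[:, :grid * ph, :grid * pw]
               .reshape(c, grid, ph, grid, pw)
               .permute(1, 3, 0, 2, 4)
               .reshape(grid * grid, c, ph, pw))
    patches = patches[torch.randperm(grid * grid)]  # random patch order
    # Reassemble the shuffled patches into an image.
    return (patches.reshape(grid, grid, c, ph, pw)
                   .permute(2, 0, 3, 1, 4)
                   .reshape(c, grid * ph, grid * pw))

# x = torch.rand(3, 224, 224); shuffled = patch_shuffle(x, grid=4)
```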