Decoupled Adversarial Contrastive Learning for Self-supervised
Adversarial Robustness
- URL: http://arxiv.org/abs/2207.10899v1
- Date: Fri, 22 Jul 2022 06:30:44 GMT
- Title: Decoupled Adversarial Contrastive Learning for Self-supervised
Adversarial Robustness
- Authors: Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang
D. Yoo, and In So Kweon
- Abstract summary: Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
We propose a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL).
- Score: 69.39073806630583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial training (AT) for robust representation learning and
self-supervised learning (SSL) for unsupervised representation learning are two
active research fields. By integrating AT into SSL, multiple prior works have
accomplished a highly significant yet challenging task: learning robust
representations without labels. A widely used framework is adversarial
contrastive learning, which couples AT and SSL and thus constitutes a very
complex optimization problem. Inspired by the divide-and-conquer philosophy, we
conjecture that it might be simplified as well as improved by solving two
sub-problems: non-robust SSL and pseudo-supervised AT. This motivation shifts
the focus of the task from seeking an optimal integration strategy for a
coupled problem to finding sub-solutions for sub-problems. Accordingly, this
work discards prior practices of directly introducing AT into SSL frameworks
and proposes a two-stage framework termed Decoupled Adversarial Contrastive
Learning (DeACL). Extensive experimental results demonstrate that DeACL
achieves state-of-the-art self-supervised adversarial robustness while
significantly reducing the training time, which validates its effectiveness
and efficiency. Moreover, DeACL constitutes a more explainable solution, and
its success also bridges the gap with semi-supervised AT for exploiting
unlabeled samples for robust representation learning. The code is publicly
accessible at https://github.com/pantheon5100/DeACL.
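To make the decoupling concrete, below is a minimal PyTorch sketch of the two-stage idea: stage one trains any standard (non-robust) SSL model and freezes it as a teacher; stage two runs pseudo-supervised AT, distilling the teacher's representations into a student on clean and adversarial inputs. The function names (pgd_attack, deacl_step), the cosine-similarity distillation loss, and all hyperparameters are illustrative assumptions rather than the authors' exact recipe; see the linked repository for the reference implementation.

```python
import torch
import torch.nn.functional as F

# Stage 1 (assumed already done): train any standard, non-robust SSL model
# (e.g., SimCLR) and freeze it; it serves as the teacher below.

def pgd_attack(student, teacher_feat, x, eps=8 / 255, alpha=2 / 255, steps=5):
    """PGD that pushes the student's features AWAY from the teacher's
    pseudo-targets. Budget/steps are common defaults, not the paper's."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Attack objective: minimize similarity to the pseudo-target.
        loss = -F.cosine_similarity(student(x_adv), teacher_feat, dim=1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the attack loss
        x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def deacl_step(student, teacher, x, optimizer, lam=1.0):
    """One pseudo-supervised AT step (stage 2): distill the frozen teacher's
    representations into the student on clean and adversarial inputs."""
    with torch.no_grad():
        t_feat = teacher(x)  # pseudo-targets from the stage-1 SSL model
    x_adv = pgd_attack(student, t_feat, x)
    # Pull both clean and adversarial student features toward the teacher.
    loss = (-F.cosine_similarity(student(x), t_feat, dim=1).mean()
            - lam * F.cosine_similarity(student(x_adv), t_feat, dim=1).mean())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```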
Related papers
- ItTakesTwo: Leveraging Peer Representations for Semi-supervised LiDAR Semantic Segmentation [24.743048965822297]
This paper introduces a novel semi-supervised LiDAR semantic segmentation framework called ItTakesTwo (IT2).
IT2 is designed to ensure consistent predictions across peer LiDAR representations, thereby improving the effectiveness of perturbations in consistency learning (a generic sketch of such a consistency objective follows this entry).
Results on public benchmarks show that our approach achieves remarkable improvements over the previous state-of-the-art (SOTA) methods in the field.
arXiv Detail & Related papers (2024-07-09T18:26:53Z)
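For flavor, a generic consistency objective between two peer predictions for the same unlabeled points might look like the sketch below. It uses symmetric KL with stopped gradients, a common choice, and is an illustration of consistency learning in general, not IT2's exact losses.

```python
import torch.nn.functional as F

def peer_consistency_loss(logits_a, logits_b, temperature=1.0):
    """Symmetric-KL consistency between two peer predictions for the same
    unlabeled points (e.g., two LiDAR representations). Gradients are
    stopped on the 'teacher' side of each term."""
    log_p_a = F.log_softmax(logits_a / temperature, dim=1)
    log_p_b = F.log_softmax(logits_b / temperature, dim=1)
    kl_ab = F.kl_div(log_p_a, log_p_b.detach().exp(), reduction="batchmean")
    kl_ba = F.kl_div(log_p_b, log_p_a.detach().exp(), reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)
```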
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) can be attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations [35.68752612346952]
The need for abundant labelled data in supervised Adversarial Training (AT) has prompted the use of Self-Supervised Learning (SSL) techniques with AT.
The direct application of existing SSL methods to adversarial training has been sub-optimal due to the increased training complexity of combining SSL with AT.
We propose appropriate attack and defense losses at the feature and projector levels, alongside a combination of weak and strong augmentations for the teacher and student, respectively (a loss sketch follows this entry).
arXiv Detail & Related papers (2024-06-09T14:20:46Z)
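A hedged sketch of what feature- and projector-level distillation with asymmetric augmentations could look like; profeat_style_losses and the equal weighting of the terms are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def profeat_style_losses(student, teacher, proj_s, proj_t,
                         x_weak, x_strong, x_adv):
    """Distill at both the feature and the projector level, with the
    teacher seeing the weak view and the student the strong/adversarial
    views. Equal weighting of the two levels is assumed."""
    with torch.no_grad():
        f_t = teacher(x_weak)   # teacher features on the weak view
        z_t = proj_t(f_t)       # teacher projector output
    f_s, f_a = student(x_strong), student(x_adv)
    z_s, z_a = proj_s(f_s), proj_s(f_a)
    # Defense objective: match the teacher at both levels, on both inputs.
    feat_loss = -(F.cosine_similarity(f_s, f_t, dim=1).mean()
                  + F.cosine_similarity(f_a, f_t, dim=1).mean())
    proj_loss = -(F.cosine_similarity(z_s, z_t, dim=1).mean()
                  + F.cosine_similarity(z_a, z_t, dim=1).mean())
    return feat_loss + proj_loss
```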
- BECLR: Batch Enhanced Contrastive Few-Shot Learning [1.450405446885067]
Unsupervised few-shot learning aspires to bridge this gap by discarding the reliance on annotations at training time.
We propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly separable latent representation space.
We then tackle the somewhat overlooked yet critical issue of sample bias at the few-shot inference stage.
arXiv Detail & Related papers (2024-02-04T10:52:43Z)
- Towards End-to-end Semi-supervised Learning for One-stage Object Detection [88.56917845580594]
This paper focuses on semi-supervised learning for the advanced and popular one-stage detection network YOLOv5.
We propose a novel teacher-student learning recipe called OneTeacher with two innovative designs, namely Multi-view Pseudo-label Refinement (MPR) and Decoupled Semi-supervised Optimization (DSO).
In particular, MPR improves the quality of pseudo-labels via augmented-view refinement and global-view filtering, and DSO handles the joint optimization conflicts via structure tweaks and task-specific pseudo-labeling.
arXiv Detail & Related papers (2023-02-22T11:35:40Z)
- Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks [119.38992029332883]
Adversarial training with imperfect supervision is significant but receives limited attention.
We propose a new learning strategy using gradually informative attacks (a simple schedule sketch follows this entry).
Experiments are conducted to demonstrate the effectiveness of our method on a range of benchmarked datasets.
arXiv Detail & Related papers (2022-11-01T04:26:45Z)
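One simple way to make attacks "gradually informative" is to ramp the perturbation budget over training; the schedule below is a hypothetical illustration of that idea, not the paper's actual rule.

```python
def attack_strength(epoch, total_epochs, eps_max=8 / 255, warmup_frac=0.5):
    """Hypothetical schedule: the perturbation budget ramps linearly over
    the first half of training, so early attacks stay weak and become
    progressively more informative as the model matures."""
    warmup = max(1, int(total_epochs * warmup_frac))
    return eps_max * min(1.0, epoch / warmup)

# Usage: eps = attack_strength(epoch, total_epochs); hand eps to the attacker.
```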
- FUSSL: Fuzzy Uncertain Self Supervised Learning [8.31483061185317]
Self-supervised learning (SSL) has become a very successful technique to harness the power of unlabeled data, with no annotation effort.
In this paper, for the first time, we recognize the fundamental limits of SSL that stem from the use of a single supervisory signal.
We propose a robust and general standard hierarchical learning/training protocol for any SSL baseline.
arXiv Detail & Related papers (2022-10-28T01:06:10Z)
- Effective Targeted Attacks for Adversarial Self-Supervised Learning [58.14233572578723]
Unsupervised adversarial training (AT) has been highlighted as a means of achieving robustness in models without any label information.
We propose a novel positive-mining method for targeted adversarial attacks that generates effective adversaries for adversarial SSL frameworks (a generic targeted-PGD sketch follows this entry).
Our method demonstrates significant enhancements in robustness when applied to non-contrastive SSL frameworks, and less but consistent robustness improvements with contrastive SSL frameworks.
arXiv Detail & Related papers (2022-10-19T11:43:39Z)
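A generic sketch of a targeted attack with batch-level positive mining: pick the most similar other sample in the batch as the target and run PGD toward its embedding. Both the mining rule and the PGD settings here are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def targeted_attack_to_positive(model, x, eps=8 / 255, alpha=2 / 255, steps=5):
    """Mine a target 'positive' for every sample (here: the most similar
    other sample in the batch, an assumption) and perturb the input so its
    embedding moves TOWARD that target."""
    with torch.no_grad():
        z = F.normalize(model(x), dim=1)
        sim = z @ z.t()
        sim.fill_diagonal_(-float("inf"))  # exclude self-similarity
        target = z[sim.argmax(dim=1)]      # mined target per sample
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        z_adv = F.normalize(model(x_adv), dim=1)
        # Targeted direction: ascend similarity to the mined target.
        loss = F.cosine_similarity(z_adv, target, dim=1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```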
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown whether the nature of the representations induced by the two learning paradigms is similar.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as the key contributor to this phenomenon (a sketch of the standard uniformity metric follows this entry).
We devise simple yet effective strategies for improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
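The hypersphere uniformity the entry refers to can be quantified with the standard Wang & Isola (2020) uniformity metric, sketched below as a diagnostic; the entry's defense strategies themselves are not reproduced here.

```python
import torch
import torch.nn.functional as F

def uniformity(z, t=2.0):
    """Wang & Isola (2020) uniformity metric: log of the mean Gaussian
    potential over all pairs of L2-normalized embeddings. More negative
    values mean representations are spread more uniformly on the sphere."""
    z = F.normalize(z, dim=1)
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()
```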
- Aggregative Self-Supervised Feature Learning from a Limited Sample [12.555160911451688]
We propose two aggregation strategies, exploiting the complementarity of various forms, to boost the robustness of features learned via self-supervision.
Our experiments on 2D natural image and 3D medical image classification tasks under limited data scenarios confirm that the proposed aggregation strategies successfully boost the classification accuracy.
arXiv Detail & Related papers (2020-12-14T12:49:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.