Improving Performance of Semi-Supervised Learning by Adversarial Attacks
- URL: http://arxiv.org/abs/2308.04018v1
- Date: Tue, 8 Aug 2023 03:29:43 GMT
- Title: Improving Performance of Semi-Supervised Learning by Adversarial Attacks
- Authors: Dongyoon Yang, Kunwoong Kim, Yongdai Kim
- Abstract summary: We present a generalized framework, named SCAR, for improving the performance of recent SSL algorithms.
By adversarially attacking pre-trained models with semi-supervision, our framework shows substantial advances in classifying images.
- Score: 1.675857332621569
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning (SSL) is built on the realistic
assumption that access to a large amount of labeled data is difficult. In this
study, we present a generalized framework, named SCAR, standing for Selecting
Clean samples with Adversarial Robustness, for improving the performance of
recent SSL algorithms. By adversarially attacking pre-trained models with
semi-supervision, our framework shows substantial advances in classifying
images. We show how adversarial attacks can select high-confidence
unlabeled data to be labeled with the current predictions. On CIFAR-10, three recent
SSL algorithms combined with SCAR achieve significantly improved image classification.
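The selection rule sketched in the abstract (keep a pseudo-label only if it is robust to an adversarial attack) can be illustrated with a small hypothetical example. The linear softmax classifier and one-step FGSM-style perturbation below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def select_robust_pseudo_labels(W, X_unlabeled, eps=0.1):
    """Hypothetical SCAR-style selection: keep an unlabeled sample only
    if its predicted label survives an FGSM-style attack on the input."""
    selected = []
    for x in X_unlabeled:
        p = softmax(W @ x)
        y_hat = int(p.argmax())
        # Gradient of cross-entropy w.r.t. x for a linear softmax model:
        #   d/dx CE(softmax(Wx), y_hat) = W^T (p - onehot(y_hat))
        onehot = np.zeros_like(p)
        onehot[y_hat] = 1.0
        grad = W.T @ (p - onehot)
        x_adv = x + eps * np.sign(grad)           # one-step FGSM perturbation
        if int(softmax(W @ x_adv).argmax()) == y_hat:
            selected.append((x, y_hat))           # robust -> trust the pseudo-label
    return selected
```

Under this sketch, a confidently classified point keeps its prediction under the attack and is selected, while a borderline point flips and is discarded.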
Related papers
- Adversarial Robustness in Zero-Shot Learning: An Empirical Study on Class and Concept-Level Vulnerabilities [38.00830507239569]
Zero-shot Learning (ZSL) aims to enable image classifiers to recognize images from unseen classes that were not included during training.
While ZSL models promise to improve generalization and interpretability, their robustness under systematic input perturbations remains unclear.
We present an empirical analysis of the robustness of existing ZSL methods at both the class level and the concept level.
arXiv Detail & Related papers (2025-12-21T08:55:56Z)
- DyConfidMatch: Dynamic Thresholding and Re-sampling for 3D Semi-supervised Learning [4.259908158892314]
Semi-supervised learning (SSL) leverages limited labeled and abundant unlabeled data but often faces challenges with data imbalance.
This study investigates class-level confidence as an indicator of learning status in 3D SSL, proposing a novel method that utilizes dynamic thresholding.
A re-sampling strategy is also introduced to mitigate bias towards well-represented classes, ensuring equitable class representation.
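The class-level dynamic-thresholding idea above can be sketched as follows. This is a simplified, hypothetical illustration (the function name and scaling rule are assumptions, not DyConfidMatch's actual formulation): each class's pseudo-labeling threshold is scaled by that class's mean confidence, so under-learned classes get a lower bar:

```python
import numpy as np

def dynamic_thresholds(probs, base_tau=0.95):
    """Hypothetical sketch of class-level dynamic thresholding:
    scale a base confidence threshold per class by that class's
    mean predicted confidence, so poorly learned classes
    contribute more pseudo-labels."""
    preds = probs.argmax(axis=1)         # predicted class per sample
    conf = probs.max(axis=1)             # confidence per sample
    n_classes = probs.shape[1]
    taus = np.full(n_classes, base_tau)
    for c in range(n_classes):
        mask = preds == c
        if mask.any():
            # class-level learning status estimated by mean confidence
            taus[c] = base_tau * conf[mask].mean()
    keep = conf >= taus[preds]           # samples passing their class threshold
    return taus, keep
```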
arXiv Detail & Related papers (2024-11-13T05:09:28Z)
- FACTUAL: A Novel Framework for Contrastive Learning Based Robust SAR Image Classification [10.911464455072391]
FACTUAL is a Contrastive Learning framework for Adversarial Training and robust SAR classification.
Our model achieves 99.7% accuracy on clean samples, and 89.6% on perturbed samples, both outperforming previous state-of-the-art methods.
arXiv Detail & Related papers (2024-04-04T06:20:22Z)
- Improving Representation Learning for Histopathologic Images with Cluster Constraints [31.426157660880673]
Self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative.
We introduce an SSL framework for transferable representation learning and semantically meaningful clustering.
Our approach outperforms common SSL methods in downstream classification and clustering tasks.
arXiv Detail & Related papers (2023-10-18T21:20:44Z)
- Effective Targeted Attacks for Adversarial Self-Supervised Learning [58.14233572578723]
Unsupervised adversarial training (AT) has been highlighted as a means of achieving robustness in models without any label information.
We propose a novel positive mining for targeted adversarial attack to generate effective adversaries for adversarial SSL frameworks.
Our method demonstrates significant enhancements in robustness when applied to non-contrastive SSL frameworks, and smaller but consistent robustness improvements with contrastive SSL frameworks.
arXiv Detail & Related papers (2022-10-19T11:43:39Z)
- Class-Level Confidence Based 3D Semi-Supervised Learning [18.95161296147023]
We show that unlabeled data class-level confidence can represent the learning status in the 3D imbalanced dataset.
Our method significantly outperforms state-of-the-art counterparts for both 3D SSL classification and detection tasks.
arXiv Detail & Related papers (2022-10-18T20:13:28Z)
- Improving Self-Supervised Learning by Characterizing Idealized Representations [155.1457170539049]
We prove necessary and sufficient conditions for any task invariant to given data augmentations.
For contrastive learning, our framework prescribes simple but significant improvements to previous methods.
For non-contrastive learning, we use our framework to derive a simple and novel objective.
arXiv Detail & Related papers (2022-09-13T18:01:03Z)
- Self Supervised Learning for Few Shot Hyperspectral Image Classification [57.2348804884321]
We propose to leverage Self Supervised Learning (SSL) for HSI classification.
We show that by pre-training an encoder on unlabeled pixels using Barlow-Twins, a state-of-the-art SSL algorithm, we can obtain accurate models with a handful of labels.
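The Barlow Twins objective mentioned above pushes the cross-correlation matrix of two embedding views toward the identity, decorrelating feature dimensions while aligning the views. A minimal NumPy sketch of that published loss (the hyperparameter value and helper name here are illustrative):

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Sketch of the Barlow Twins objective: drive the cross-correlation
    matrix of two batch-normalized embedding views toward the identity."""
    # standardize each embedding dimension over the batch
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-9)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-9)
    n = z1.shape[0]
    c = (z1.T @ z2) / n                             # cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()       # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy reduction
    return on_diag + lam * off_diag
```

With identical, well-conditioned views the loss is near zero; misaligned views are penalized through the invariance term.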
arXiv Detail & Related papers (2022-06-24T07:21:53Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
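SCARF's view construction (corrupting a random subset of features) can be sketched as below. Drawing replacement values from other rows of the batch is an assumed stand-in for sampling from each feature's empirical marginal; the function name and corruption rate are illustrative:

```python
import numpy as np

def scarf_view(x, X_batch, corruption_rate=0.6, rng=None):
    """Hypothetical sketch of a SCARF-style corrupted view: replace a
    random subset of features with values sampled from other rows of
    the batch (approximating the features' empirical marginals)."""
    if rng is None:
        rng = np.random.default_rng()
    x_view = x.copy()
    d = x.shape[0]
    # pick which features to corrupt
    idx = rng.choice(d, size=int(corruption_rate * d), replace=False)
    for j in idx:
        # replacement value for feature j comes from a random batch row
        x_view[j] = X_batch[rng.integers(len(X_batch)), j]
    return x_view
```

Two such views of the same row would then form a positive pair for the contrastive loss.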
arXiv Detail & Related papers (2021-06-29T08:08:33Z)
- A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification [38.68079253627819]
Our benchmark consists of two fine-grained classification datasets obtained by sampling classes from the Aves and Fungi taxonomy.
We find that recently proposed SSL methods provide significant benefits, and can effectively use out-of-class data to improve performance when deep networks are trained from scratch.
Our work suggests that semi-supervised learning with experts on realistic datasets may require different strategies than those currently prevalent in the literature.
arXiv Detail & Related papers (2021-04-01T17:59:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.