Adversarial Training with Complementary Labels: On the Benefit of
Gradually Informative Attacks
- URL: http://arxiv.org/abs/2211.00269v1
- Date: Tue, 1 Nov 2022 04:26:45 GMT
- Title: Adversarial Training with Complementary Labels: On the Benefit of
Gradually Informative Attacks
- Authors: Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo
Han, Masashi Sugiyama
- Abstract summary: Adversarial training with imperfect supervision is significant but receives limited attention.
We propose a new learning strategy using gradually informative attacks.
Experiments are conducted to demonstrate the effectiveness of our method on a range of benchmarked datasets.
- Score: 119.38992029332883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training (AT) with imperfect supervision is significant but
receives limited attention. To push AT towards more practical scenarios, we
explore a brand new yet challenging setting, i.e., AT with complementary labels
(CLs), which specify a class that a data sample does not belong to. However,
the direct combination of AT with existing methods for CLs fails
consistently, whereas a simple two-stage training baseline does not. In this
paper, we further explore the phenomenon and identify the underlying challenges
of AT with CLs as intractable adversarial optimization and low-quality
adversarial examples. To address the above problems, we propose a new learning
strategy using gradually informative attacks, which consists of two critical
components: 1) Warm-up Attack (Warm-up) gently raises the adversarial
perturbation budgets to ease the adversarial optimization with CLs; 2)
Pseudo-Label Attack (PLA) incorporates the progressively informative model
predictions into a corrected complementary loss. Extensive experiments are
conducted to demonstrate the effectiveness of our method on a range of
benchmarked datasets. The code is publicly available at:
https://github.com/RoyalSkye/ATCL.
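The budget warm-up idea can be illustrated with a minimal sketch: the perturbation budget is raised gradually across epochs before the attack runs at full strength. The linear schedule, function names, and one-step sign attack below are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np

def epsilon_schedule(epoch, warmup_epochs, eps_max):
    """Linearly raise the L_inf perturbation budget during warm-up,
    then hold it at eps_max (a sketch of the Warm-up Attack idea)."""
    return eps_max * min(1.0, (epoch + 1) / warmup_epochs)

def one_step_attack(x, grad, eps):
    """One-step sign attack: perturb x by eps in the loss-gradient
    sign direction (stand-in for a full multi-step attack)."""
    return x + eps * np.sign(grad)

# Toy usage: the budget grows over 5 warm-up epochs, then stays at eps_max.
eps_max = 8 / 255
for epoch in range(7):
    eps = epsilon_schedule(epoch, warmup_epochs=5, eps_max=eps_max)
    # ... generate adversarial examples with budget `eps`, train on them ...
```

Easing the budget in this way gives the model weak, nearly-clean adversarial examples early on, when complementary-label supervision is still noisy, and only exposes it to full-strength attacks once training has stabilized.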
Related papers
- A Comprehensive Study of Privacy Risks in Curriculum Learning [25.57099711643689]
Training a machine learning model with data presented in a meaningful order has proven effective in accelerating the training process.
The key enabling technique is curriculum learning (CL), which has seen great success and has been deployed in areas like image and text classification.
Yet, how CL affects the privacy of machine learning is unclear.
arXiv Detail & Related papers (2023-10-16T07:06:38Z)
- Outlier Robust Adversarial Training [57.06824365801612]
We introduce Outlier Robust Adversarial Training (ORAT) in this work.
ORAT is based on a bi-level optimization formulation of adversarial training with a robust rank-based loss function.
We show that the learning objective of ORAT satisfies the $\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate to the adversarial 0/1 loss.
arXiv Detail & Related papers (2023-09-10T21:36:38Z)
- One Class One Click: Quasi Scene-level Weakly Supervised Point Cloud Semantic Segmentation with Active Learning [29.493759008637532]
We introduce One Class One Click (OCOC), a low-cost yet informative quasi scene-level label, which encapsulates point-level and scene-level annotations.
An active weakly supervised framework is proposed to leverage scarce labels by involving weak supervision from global and local perspectives.
It considerably outperforms genuine scene-level weakly supervised methods by up to 25% in terms of average F1 score.
arXiv Detail & Related papers (2022-11-23T01:23:26Z)
- Effective Targeted Attacks for Adversarial Self-Supervised Learning [58.14233572578723]
Unsupervised adversarial training (AT) has been highlighted as a means of achieving robustness in models without any label information.
We propose a novel positive mining for targeted adversarial attack to generate effective adversaries for adversarial SSL frameworks.
Our method demonstrates significant enhancements in robustness when applied to non-contrastive SSL frameworks, and smaller but consistent robustness improvements with contrastive SSL frameworks.
arXiv Detail & Related papers (2022-10-19T11:43:39Z)
- Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness [69.39073806630583]
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
We propose a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL).
arXiv Detail & Related papers (2022-07-22T06:30:44Z)
- Adversarial Contrastive Learning via Asymmetric InfoNCE [64.42740292752069]
We propose to treat adversarial samples unequally by contrasting them with an asymmetric InfoNCE objective.
In this asymmetric fashion, the adverse impacts of conflicting objectives between CL and adversarial learning can be effectively mitigated.
Experiments show that our approach consistently outperforms existing Adversarial CL methods.
arXiv Detail & Related papers (2022-07-18T04:14:36Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.