Enhancing Adversarial Robustness in Low-Label Regime via Adaptively
Weighted Regularization and Knowledge Distillation
- URL: http://arxiv.org/abs/2308.04061v1
- Date: Tue, 8 Aug 2023 05:48:38 GMT
- Title: Enhancing Adversarial Robustness in Low-Label Regime via Adaptively
Weighted Regularization and Knowledge Distillation
- Authors: Dongyoon Yang, Insung Kong, Yongdai Kim
- Abstract summary: We investigate semi-supervised adversarial training where labeled data is scarce.
We develop a semi-supervised adversarial training algorithm that combines the proposed regularization term with knowledge distillation.
Our proposed algorithm achieves state-of-the-art performance, surpassing existing algorithms by significant margins.
- Score: 1.675857332621569
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness is a research area that has recently received a lot of
attention in the quest for trustworthy artificial intelligence. However, recent
work on adversarial robustness has focused on supervised learning, where
labeled data is assumed to be plentiful. In this paper, we investigate
semi-supervised adversarial training where labeled data is scarce. We derive
two upper bounds for the robust risk and propose a regularization term for
unlabeled data motivated by these two upper bounds. Then, we develop a
semi-supervised adversarial training algorithm that combines the proposed
regularization term with knowledge distillation using a semi-supervised teacher
(i.e., a teacher model trained using a semi-supervised learning algorithm). Our
experiments show that the proposed algorithm achieves state-of-the-art
performance, surpassing existing algorithms by significant margins. In
particular, its performance is not much worse than that of supervised learning
algorithms even when the amount of labeled data is very small. For example,
with only 8% of the labels, our algorithm is comparable to supervised
adversarial training algorithms that use all labeled data, in terms of both
standard and robust accuracy on CIFAR-10.
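The objective sketched in the abstract combines three per-batch terms: a supervised adversarial loss on labeled data, a regularizer on unlabeled data, and distillation from a semi-supervised teacher. Below is a minimal PyTorch sketch of that combination. It is an illustration, not the paper's implementation: a TRADES-style KL consistency term stands in for the paper's regularizer (which is derived from its two robust-risk upper bounds), the fixed weights `beta` and `gamma` stand in for the paper's adaptive weighting, and `teacher` is assumed to be a frozen, semi-supervised pre-trained model.

```python
import torch
import torch.nn.functional as F

def pgd_kl(model, x, steps=10, eps=8/255, alpha=2/255):
    """Craft a perturbation that maximizes the KL divergence between
    the model's predictions on the clean and perturbed inputs."""
    p_clean = F.softmax(model(x), dim=1).detach()
    x_adv = x + 0.001 * torch.randn_like(x)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                        reduction="batchmean")
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project to eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep a valid image
    return x_adv.detach()

def semisup_at_loss(model, teacher, x_lab, y_lab, x_unl, beta=6.0, gamma=1.0):
    """Labeled adversarial loss + unlabeled consistency regularizer
    + knowledge distillation from a semi-supervised teacher."""
    # (1) supervised adversarial term on labeled data
    loss_sup = F.cross_entropy(model(pgd_kl(model, x_lab)), y_lab)
    # (2) consistency regularizer on unlabeled data (TRADES-style stand-in)
    x_unl_adv = pgd_kl(model, x_unl)
    loss_reg = F.kl_div(F.log_softmax(model(x_unl_adv), dim=1),
                        F.softmax(model(x_unl), dim=1).detach(),
                        reduction="batchmean")
    # (3) distill the teacher's soft labels on unlabeled data
    with torch.no_grad():
        p_teacher = F.softmax(teacher(x_unl), dim=1)
    loss_kd = F.kl_div(F.log_softmax(model(x_unl), dim=1), p_teacher,
                       reduction="batchmean")
    return loss_sup + beta * loss_reg + gamma * loss_kd
```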
Related papers
- One-bit Supervision for Image Classification: Problem, Solution, and
Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision.
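A hypothetical sketch of the one-bit interaction, assuming the learner queries its top prediction, receives a correct/incorrect bit, and suppresses a refuted class by masking it out of the output distribution; all names are illustrative, and the paper's multi-stage paradigm is not reproduced here:

```python
import torch
import torch.nn.functional as F

def one_bit_query(logits, oracle_labels):
    """Ask one bit per sample: 'is the model's top guess correct?'
    The annotator is simulated here with oracle_labels."""
    guess = logits.argmax(dim=1)
    answer = guess.eq(oracle_labels)   # a single bit of supervision
    return guess, answer

def suppress_negative_labels(logits, guess, answer):
    """Negative label suppression: for samples answered 'no',
    mask the refuted class out before normalizing."""
    mask = torch.zeros_like(logits, dtype=torch.bool)
    wrong = ~answer
    mask[wrong, guess[wrong]] = True
    return F.log_softmax(logits.masked_fill(mask, float("-inf")), dim=1)
```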
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
- Stochastic Differentially Private and Fair Learning [7.971065005161566]
We provide the first differentially private algorithm for fair learning that is guaranteed to converge.
Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds.
Our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes.
arXiv Detail & Related papers (2022-10-17T06:54:57Z)
- Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning [43.51182049644767]
Semi-supervised learning (SSL) has long proved to be an effective technique for building powerful models with limited labels.
Regularization-based methods, which force perturbed samples to have predictions similar to the original ones, have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
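A hedged sketch of such a loss under a SimCLR-style setup: the embedding of a mixed input is pulled toward the same mixture of the endpoint embeddings, with the other samples in the batch acting as negatives (the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def interpolation_contrastive_loss(encoder, x, tau=0.5):
    """Encourage embeddings to change linearly between samples: the
    embedding of mix(x_i, x_j) should match the same mix of their
    embeddings, contrasted against the rest of the batch."""
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1 - lam) * x[perm]
    z = F.normalize(encoder(x), dim=1)
    z_mix = F.normalize(encoder(x_mix), dim=1)
    target = F.normalize(lam * z + (1 - lam) * z[perm], dim=1)
    logits = z_mix @ target.t() / tau                  # all-pairs similarity
    labels = torch.arange(x.size(0), device=x.device)  # positive = own mixture
    return F.cross_entropy(logits, labels)
```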
arXiv Detail & Related papers (2022-02-24T06:00:05Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
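For context, BALD, the uncertainty-based acquisition referenced above, scores candidates by the mutual information between the predicted label and the model posterior; a minimal MC-dropout sketch (an illustration of the acquisition function, not the paper's code):

```python
import torch

def bald_scores(model, x, n_samples=20):
    """BALD acquisition via MC dropout: H[mean prediction] minus
    mean[H[prediction]]; higher scores = more informative points."""
    model.train()                        # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])      # (S, N, C)
    mean_p = probs.mean(dim=0)
    h_mean = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)
    mean_h = -(probs * probs.clamp_min(1e-12).log()).sum(dim=2).mean(dim=0)
    return h_mean - mean_h
```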
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
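For intuition, the classic multiplicative-weights scheme for sleeping experts is sketched below; the paper's contribution is a tailored variant achieving low regret on predictable instances, which this sketch does not reproduce (it also assumes at least one expert is awake each round):

```python
import numpy as np

def sleeping_experts(losses, awake, eta=0.5):
    """Multiplicative weights over sleeping experts.
    losses: (T, K) array of per-round expert losses in [0, 1]
    awake:  (T, K) boolean mask of available experts per round"""
    T, K = losses.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        a = awake[t]
        p = (w * a) / (w * a).sum()          # play only awake experts
        total += p @ losses[t]
        w[a] *= np.exp(-eta * losses[t][a])  # penalize awake experts only
    return total
```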
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
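A minimal sketch of such an instance-wise attack, assuming a SimCLR-style NT-Xent loss: one augmented view is perturbed to maximize the contrastive loss so the model confuses instance identities, with no labels involved (an illustrative stand-in, not the authors' exact procedure):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two batches of embeddings."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))   # drop self-similarity
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, pos)

def instancewise_attack(encoder, x1, x2, steps=7, eps=8/255, alpha=2/255):
    """Perturb view x1 to maximize the contrastive loss against view x2,
    confusing instance-level identity without any class labels."""
    x_adv = x1.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nt_xent(encoder(x_adv), encoder(x2))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x1 + torch.clamp(x_adv - x1, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```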
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
- Meta-learning with Stochastic Linear Bandits [120.43000970418939]
We consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is a squared Euclidean distance to a bias vector.
We show both theoretically and experimentally, that when the number of tasks grows and the variance of the task-distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
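The biased regularizer has a closed-form solution; a small sketch, with the bias vector taken as the average of previously solved tasks (illustrative; the paper's strategy and analysis are richer):

```python
import numpy as np

def biased_ridge(X, y, h, lam=1.0):
    """Solve argmin_theta ||X @ theta - y||^2 + lam * ||theta - h||^2,
    i.e. ridge regression pulled toward the bias vector h."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * h)

def estimate_bias(task_thetas):
    """Meta-step: use the mean of past task solutions as the bias.
    When tasks are many and their variance is small, regularizing
    toward this mean beats learning each task in isolation."""
    return np.mean(task_thetas, axis=0)
```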
arXiv Detail & Related papers (2020-05-18T08:41:39Z)
- Noise-tolerant, Reliable Active Classification with Comparison Queries [25.725730509014355]
We study the paradigm of active learning, in which algorithms with access to large pools of data may adaptively choose what samples to label.
We provide the first time and query efficient algorithms for learning non-homogeneous linear separators robust to bounded (Massart) noise.
arXiv Detail & Related papers (2020-01-15T19:00:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.