Robust Regularization with Adversarial Labelling of Perturbed Samples
- URL: http://arxiv.org/abs/2105.13745v1
- Date: Fri, 28 May 2021 11:26:49 GMT
- Title: Robust Regularization with Adversarial Labelling of Perturbed Samples
- Authors: Xiaohui Guo, Richong Zhang, Yaowei Zheng, Yongyi Mao
- Abstract summary: We propose Adversarial Labelling of Perturbed Samples (ALPS) as a regularization scheme.
ALPS trains neural networks with synthetic samples formed by perturbing each authentic input sample towards another one along with an adversarially assigned label.
Experiments on the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves state-of-the-art regularization performance.
- Score: 22.37046166576859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has suggested that the predictive accuracy of a
neural network may be at odds with its adversarial robustness. This presents challenges
in designing effective regularization schemes that also provide strong
adversarial robustness. Revisiting Vicinal Risk Minimization (VRM) as a
unifying regularization principle, we propose Adversarial Labelling of
Perturbed Samples (ALPS) as a regularization scheme that aims at improving the
generalization ability and adversarial robustness of the trained model. ALPS
trains neural networks with synthetic samples formed by perturbing each
authentic input sample towards another one along with an adversarially assigned
label. The ALPS regularization objective is formulated as a min-max problem, in
which the outer problem is minimizing an upper-bound of the VRM loss, and the
inner problem is $L_1$-ball-constrained adversarial labelling on the perturbed
samples. The analytic solution to the induced inner maximization problem is
elegantly derived, which enables computational efficiency. Experiments on the
SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves
state-of-the-art regularization performance while also serving as an effective
adversarial training scheme.
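The two moving parts of the abstract's objective (perturbing a sample towards another one, then adversarially assigning its label inside an $L_1$ ball) can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the interpolation is a mixup-style assumption, and `adversarial_label` encodes a simplified reading of the analytic inner solution (cross-entropy is linear in the label, so the worst case shifts probability mass towards the least-predicted class).

```python
import numpy as np

def mix_sample(x_a, x_b, lam):
    """Perturb an authentic sample x_a towards another sample x_b (mixup-style assumption)."""
    return lam * x_a + (1.0 - lam) * x_b

def adversarial_label(y_soft, log_p, eps):
    """Worst-case label inside an L1 ball of radius eps around the soft label y_soft.

    Cross-entropy -sum(y * log_p) is linear in the label y, so under the
    simplified reading sketched here the maximizer moves eps/2 of probability
    mass from the class the model predicts most confidently (among classes
    carrying label mass) to the class it predicts least confidently.
    """
    y = y_soft.astype(float).copy()
    donor = np.argmax(np.where(y > 0, log_p, -np.inf))  # confident class holding label mass
    receiver = np.argmin(log_p)                         # least confident class
    shift = min(eps / 2.0, y[donor])                    # stay inside the simplex
    y[donor] -= shift
    y[receiver] += shift
    return y
```

For example, with a mixed label `[0.5, 0.5, 0.0]`, model log-probabilities `log([0.7, 0.2, 0.1])`, and `eps = 0.2`, the adversarial label becomes `[0.4, 0.5, 0.1]`: mass moves from the model's favourite class to its least favourite, while the label stays a valid distribution.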
Related papers
- Feature Attenuation of Defective Representation Can Resolve Incomplete Masking on Anomaly Detection [1.0358639819750703]
In unsupervised anomaly detection (UAD) research, it is necessary to develop a computationally efficient and scalable solution.
We revisit the reconstruction-by-inpainting approach and rethink how to improve it by analyzing its strengths and weaknesses.
We propose Feature Attenuation of Defective Representation (FADeR), which employs only two layers to attenuate the feature information of anomaly reconstructions.
arXiv Detail & Related papers (2024-07-05T15:44:53Z) - TSFool: Crafting Highly-Imperceptible Adversarial Time Series through
Multi-Objective Attack [6.698263855886704]
We propose an efficient method called TSFool to craft highly-imperceptible adversarial time series for RNN-based TSC.
The core idea is a new global optimization objective known as "Camouflage Coefficient" that captures the imperceptibility of adversarial samples from the class distribution.
Experiments on 11 UCR and UEA datasets showcase that TSFool significantly outperforms six white-box and three black-box benchmark attacks.
arXiv Detail & Related papers (2022-09-14T03:02:22Z) - {\delta}-SAM: Sharpness-Aware Minimization with Dynamic Reweighting [17.50856935207308]
Adversarial training has shown effectiveness in improving generalization by regularizing the change of loss on top of adversarially chosen perturbations.
The recently proposed sharpness-aware minimization (SAM) algorithm adopts adversarial weight perturbation, encouraging the model to converge to a flat minimum.
We propose that dynamically reweighted perturbation within each batch, where unguarded instances are up-weighted, can serve as a better approximation to per-instance perturbation.
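The batch-level reweighting idea above can be sketched as a softmax over per-instance losses, so that poorly defended ("unguarded") instances dominate the shared perturbation. This is a hypothetical reading, and `temperature` is an illustrative knob, not a parameter from the paper.

```python
import numpy as np

def batch_weights(losses, temperature=1.0):
    """Up-weight high-loss (unguarded) instances with a softmax over losses.

    `temperature` is a hypothetical knob: smaller values concentrate the
    weight on the worst-defended instances in the batch.
    """
    losses = np.asarray(losses, dtype=float)
    z = (losses - losses.max()) / temperature  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()
```

The shared batch perturbation would then be computed on the weighted loss `sum(w[i] * loss[i])` rather than the uniform mean, approximating a per-instance perturbation at the cost of a single batch-level one.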
arXiv Detail & Related papers (2021-12-16T10:36:35Z) - Simple Adaptive Projection with Pretrained Features for Anomaly
Detection [0.0]
We propose a novel adaptation framework including simple linear transformation and self-attention.
Our simple adaptive projection with pretrained features (SAP2) yields a novel anomaly detection criterion.
arXiv Detail & Related papers (2021-12-05T15:29:59Z) - Generalization of Neural Combinatorial Solvers Through the Lens of
Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.