Robust Regularization with Adversarial Labelling of Perturbed Samples
- URL: http://arxiv.org/abs/2105.13745v1
- Date: Fri, 28 May 2021 11:26:49 GMT
- Title: Robust Regularization with Adversarial Labelling of Perturbed Samples
- Authors: Xiaohui Guo, Richong Zhang, Yaowei Zheng, Yongyi Mao
- Abstract summary: We propose Adversarial Labelling of Perturbed Samples (ALPS) as a regularization scheme.
ALPS trains neural networks with synthetic samples formed by perturbing each authentic input sample towards another one along with an adversarially assigned label.
Experiments on the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves state-of-the-art regularization performance.
- Score: 22.37046166576859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has suggested that the predictive accuracy of a neural
network may be at odds with its adversarial robustness. This presents challenges
in designing effective regularization schemes that also provide strong
adversarial robustness. Revisiting Vicinal Risk Minimization (VRM) as a
unifying regularization principle, we propose Adversarial Labelling of
Perturbed Samples (ALPS) as a regularization scheme that aims at improving the
generalization ability and adversarial robustness of the trained model. ALPS
trains neural networks with synthetic samples formed by perturbing each
authentic input sample towards another one along with an adversarially assigned
label. The ALPS regularization objective is formulated as a min-max problem, in
which the outer problem is minimizing an upper-bound of the VRM loss, and the
inner problem is L$_1$-ball-constrained adversarial labelling of the perturbed
sample. The analytic solution to the induced inner maximization problem is
derived, which makes the scheme computationally efficient. Experiments on the
SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves
state-of-the-art regularization performance while also serving as an effective
adversarial training scheme.
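To make the min-max structure concrete: schematically, ALPS solves something like $\min_\theta \mathbb{E}\,[\max_{\|\delta\|_1 \le \epsilon} \ell(f_\theta(\tilde{x}), \tilde{y} + \delta)]$, where $\tilde{x}$ is a sample perturbed towards another one and $\tilde{y}$ its vicinal label. The PyTorch-style sketch below is a minimal illustration under assumptions, not the authors' implementation: the pairing rule, the interpolation weight `lam`, the ball radius `eps`, and the mass-shifting rule used for the inner adversarial-labelling step are all hypothetical stand-ins; the paper's closed-form solution to the inner maximization is not reproduced from the abstract.

```python
import torch
import torch.nn.functional as F

def alps_style_loss(model, x, y, num_classes, lam=0.5, eps=0.1):
    """Illustrative sketch of an ALPS-style objective (not the paper's exact scheme).

    Each input is perturbed towards a randomly paired sample, and the training
    label is then shifted adversarially inside an L1 ball of radius `eps` around
    the interpolated label. The paper derives an analytic solution for this inner
    maximization; the mass-shifting rule below is only a hypothetical stand-in.
    """
    idx = torch.randperm(x.size(0), device=x.device)
    x_tilde = x + lam * (x[idx] - x)                     # perturb towards another sample
    y_a = F.one_hot(y, num_classes).float()
    y_b = F.one_hot(y[idx], num_classes).float()
    y_tilde = (1.0 - lam) * y_a + lam * y_b              # vicinal (interpolated) label

    logits = model(x_tilde)
    log_p = F.log_softmax(logits, dim=1)

    with torch.no_grad():
        # Inner problem (approximation): move eps/2 probability mass from the
        # class with the smallest per-class loss to the class with the largest,
        # which increases the cross-entropy while keeping the label a valid
        # distribution within the L1 ball around y_tilde.
        per_class_loss = -log_p
        hardest = per_class_loss.argmax(dim=1, keepdim=True)
        easiest = per_class_loss.argmin(dim=1, keepdim=True)
        budget = torch.full_like(y_tilde.gather(1, easiest), eps / 2)
        shift = torch.minimum(budget, y_tilde.gather(1, easiest))   # stay non-negative
        y_adv = y_tilde.clone()
        y_adv.scatter_add_(1, hardest, shift)
        y_adv.scatter_add_(1, easiest, -shift)

    # Outer problem: minimize the vicinal-risk loss on the adversarially labelled sample.
    return torch.sum(-y_adv * log_p, dim=1).mean()
```

The point of the sketch is only the two-level structure the abstract describes: an inner, label-space maximization under an L$_1$ constraint, followed by an outer minimization of the resulting vicinal loss.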
Related papers
- Typicalness-Aware Learning for Failure Detection [26.23185979968123]
Deep neural networks (DNNs) often suffer from the overconfidence issue, where incorrect predictions are made with high confidence scores.
We propose a novel approach called Typicalness-Aware Learning (TAL) to address this issue and improve failure detection performance.
arXiv Detail & Related papers (2024-11-04T11:09:47Z)
- Evaluating Model Robustness Using Adaptive Sparse L0 Regularization [5.772716337390152]
Adversarial examples challenge existing defenses by altering a minimal subset of features.
Current L0-norm attack methods face a trade-off between accuracy and efficiency.
This paper proposes a novel, scalable, and effective approach to generate adversarial examples based on the L0 norm.
arXiv Detail & Related papers (2024-08-28T11:02:23Z)
- Regularization for Adversarial Robust Learning [18.46110328123008]
We develop a novel approach to adversarial training that integrates $\phi$-divergence regularization into the distributionally robust risk function.
This regularization brings a notable improvement in computation compared with the original formulation.
We validate our proposed method in supervised learning, reinforcement learning, and contextual learning and showcase its state-of-the-art performance against various adversarial attacks.
arXiv Detail & Related papers (2024-08-19T03:15:41Z)
- Feature Attenuation of Defective Representation Can Resolve Incomplete Masking on Anomaly Detection [1.0358639819750703]
In unsupervised anomaly detection (UAD) research, it is necessary to develop a computationally efficient and scalable solution.
We revisit the reconstruction-by-inpainting approach and rethink how to improve it by analyzing its strengths and weaknesses.
We propose Feature Attenuation of Defective Representation (FADeR), which employs only two layers to attenuate the feature information of anomaly reconstruction.
arXiv Detail & Related papers (2024-07-05T15:44:53Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)