Towards Robust Neural Networks via Orthogonal Diversity
- URL: http://arxiv.org/abs/2010.12190v5
- Date: Tue, 16 Jan 2024 02:34:11 GMT
- Title: Towards Robust Neural Networks via Orthogonal Diversity
- Authors: Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai,
Xiaolin Huang and Jie Yang
- Abstract summary: A series of methods, represented by adversarial training and its variants, has proven to be among the most effective techniques for enhancing the robustness of Deep Neural Networks (DNNs).
This paper proposes a novel defense that augments the model so that it learns features adaptive to diverse inputs, including adversarial examples.
In this way, the proposed DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by these mutually orthogonal paths.
- Score: 30.77473391842894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are vulnerable to imperceptible perturbations of images generated by adversarial attacks, which has spurred research on the adversarial robustness of DNNs. A series of methods, represented by adversarial training and its variants, has proven to be among the most effective techniques for enhancing DNN robustness. Generally, adversarial training focuses on enriching the training data with perturbed examples. However, this data-augmentation effect of the perturbed data does not contribute to the robustness of the DNN itself and usually suffers from a drop in clean accuracy. Towards the robustness of the DNN itself, this paper proposes a novel defense that augments the model so that it learns features adaptive to diverse inputs, including adversarial examples. More specifically, to augment the model, multiple paths are embedded into the network, and an orthogonality constraint is imposed on these paths to guarantee diversity among them. A margin-maximization loss is then designed to further boost such DIversity via Orthogonality (DIO). In this way, the proposed DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by these mutually orthogonal paths. Extensive empirical results on various datasets, architectures and attacks verify the stronger adversarial robustness achieved by the proposed DIO through model augmentation. Moreover, DIO can be flexibly combined with different data-augmentation techniques (e.g., TRADES and DDPM) for further robustness gains.
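The abstract describes DIO only at a high level: multiple paths attached to a shared network, an orthogonality constraint keeping the paths diverse, and a margin-maximization loss. The sketch below illustrates one plausible way to assemble such ingredients in PyTorch; the module names, the choice of applying the orthogonality penalty to normalized path features, and the hinge-style margin term are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-path model with an orthogonality penalty and a
# margin-maximization term, loosely following the abstract's description of DIO.
# Names and design details are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiPathModel(nn.Module):
    """A shared backbone followed by several parallel 'paths' (heads)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, num_paths: int = 4):
        super().__init__()
        self.backbone = backbone                       # any feature extractor -> (B, feat_dim)
        self.paths = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_paths)]
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.backbone(x)                           # shared features
        feats = [F.relu(p(z)) for p in self.paths]     # one feature per path
        logits = [self.classifier(f) for f in feats]   # shared classifier
        return feats, logits


def orthogonality_penalty(feats):
    """Penalize pairwise alignment between the (normalized) path features."""
    f = torch.stack([F.normalize(fi, dim=1) for fi in feats], dim=1)  # (B, K, D)
    gram = torch.bmm(f, f.transpose(1, 2))             # (B, K, K) cosine similarities
    eye = torch.eye(gram.size(1), device=gram.device)
    return ((gram - eye) ** 2).mean()                  # push off-diagonals toward 0


def margin_loss(logits, target, margin: float = 1.0):
    """Hinge-style term pushing the true-class logit above the best other class."""
    true = logits.gather(1, target.unsqueeze(1))
    other = logits.masked_fill(
        F.one_hot(target, logits.size(1)).bool(), float("-inf")
    ).max(dim=1, keepdim=True).values
    return F.relu(margin - (true - other)).mean()


def dio_style_loss(feats, logits, target, lam_orth=1.0, lam_margin=0.5):
    """Cross-entropy on every path plus orthogonality and margin terms."""
    ce = sum(F.cross_entropy(l, target) for l in logits) / len(logits)
    mg = sum(margin_loss(l, target) for l in logits) / len(logits)
    return ce + lam_orth * orthogonality_penalty(feats) + lam_margin * mg
```

Whether the paper enforces orthogonality on features, weights, or some other quantity, and the exact form of the margin term, cannot be recovered from the abstract alone; the sketch only conveys the general recipe of augmenting the model with mutually diverse paths.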
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z) - Tighter Bounds on the Information Bottleneck with Application to Deep Learning [6.206127662604578]
Deep Neural Nets (DNNs) learn latent representations induced by their downstream task, objective function, and other parameters.
The Information Bottleneck (IB) provides a hypothetically optimal framework for data modeling, yet it is often intractable.
Recent efforts combined DNNs with the IB by applying VAE-inspired variational methods to approximate bounds on mutual information, resulting in improved robustness to adversarial attacks.
arXiv Detail & Related papers (2024-02-12T13:24:32Z) - Common Knowledge Learning for Generating Transferable Adversarial Examples [60.1287733223249]
This paper focuses on an important type of black-box attack, in which the adversary generates adversarial examples with a substitute (source) model.
Existing methods tend to give unsatisfactory adversarial transferability when the source and target models are from different types of DNN architectures.
We propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples.
arXiv Detail & Related papers (2023-07-01T09:07:12Z) - AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient [12.118084418840152]
Adversarial training is exploited to develop Deep Neural Network (DNN) models that are robust against maliciously altered data (a minimal adversarial-training step is sketched after this list).
This paper aims at accelerating the adversarial training to enable fast development of robust DNN models against adversarial attacks.
arXiv Detail & Related papers (2022-10-13T10:31:51Z) - Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization [58.641705224371876]
The adversarial vulnerability of deep neural networks (DNNs) has been actively investigated in the past several years.
This paper investigates the scale-variant property of cross-entropy loss, which is the most commonly used loss function in classification tasks.
We show that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training.
arXiv Detail & Related papers (2022-10-11T03:16:56Z) - Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to mitigate their impact through robust training.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z) - Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training has been proven to be the most effective strategy that injects adversarial examples into model training.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z) - A Mask-Based Adversarial Defense Scheme [3.759725391906588]
Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs).
We propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effect from adversarial attacks.
arXiv Detail & Related papers (2022-04-21T12:55:27Z) - Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z) - On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z) - Improving adversarial robustness of deep neural networks by using semantic information [17.887586209038968]
Adversarial training is the main method for improving adversarial robustness and the first line of defense against adversarial attacks.
This paper provides a new perspective on the issue of adversarial robustness, one that shifts the focus from the network as a whole to the critical part of the region close to the decision boundary corresponding to a given class.
Experimental results on the MNIST and CIFAR-10 datasets show that this approach greatly improves adversarial robustness even using a very small dataset from the training data.
arXiv Detail & Related papers (2020-08-18T10:23:57Z)
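Adversarial training, referenced by the abstract and by several of the entries above, follows a common recipe: an inner attack perturbs each training batch, and the model is updated on the perturbed batch. Below is a minimal PGD-style sketch of that recipe in PyTorch; the step size, radius, and iteration count are illustrative defaults and are not taken from any of the papers listed here.

```python
# Minimal PGD-based adversarial training step (illustrative hyperparameters).
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient ascent on the loss within an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)      # random start
    x_adv = x_adv.clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps) # project back into the ball
        x_adv = x_adv.clamp(0, 1)                             # stay in valid pixel range
    return x_adv.detach()


def adversarial_training_step(model, optimizer, x, y):
    model.eval()                       # attack with fixed batch-norm statistics
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # train on the perturbed batch
    loss.backward()
    optimizer.step()
    return loss.item()
```

Variants such as TRADES (mentioned in the abstract) replace the purely adversarial objective with a trade-off between a clean-accuracy term and a robustness term.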
This list is automatically generated from the titles and abstracts of the papers on this site.