Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust
Deep Neural Architectures
- URL: http://arxiv.org/abs/2207.05321v1
- Date: Tue, 12 Jul 2022 05:26:09 GMT
- Title: Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust
Deep Neural Architectures
- Authors: Jia Liu, Ran Cheng, Yaochu Jin
- Abstract summary: This paper proposes a bi-fidelity multiobjective neural architecture search approach.
In addition to a low-fidelity performance predictor, we leverage an auxiliary objective whose value is the output of a surrogate model trained with high-fidelity evaluations.
The effectiveness of the proposed approach is confirmed by extensive experiments conducted on CIFAR-10, CIFAR-100 and SVHN datasets.
- Score: 19.173285459139592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been found vulnerable to adversarial attacks, thus
raising potential concerns in security-sensitive contexts. To address this
problem, recent research has investigated the adversarial robustness of deep
neural networks from the architectural point of view. However, searching for
architectures of deep neural networks is computationally expensive,
particularly when coupled with the adversarial training process. To meet this
challenge, this paper proposes a bi-fidelity multiobjective neural architecture
search approach. First, we formulate the NAS problem for enhancing adversarial
robustness of deep neural networks into a multiobjective optimization problem.
Specifically, in addition to a low-fidelity performance predictor as the first
objective, we leverage an auxiliary objective whose value is the output of a
surrogate model trained with high-fidelity evaluations. Second,
we reduce the computational cost by combining three performance estimation
methods, i.e., parameter sharing, low-fidelity evaluation, and surrogate-based
predictor. The effectiveness of the proposed approach is confirmed by extensive
experiments conducted on CIFAR-10, CIFAR-100 and SVHN datasets.
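As a rough illustration of the bi-fidelity formulation, the sketch below selects architectures by Pareto dominance over the two objectives. The evaluator functions are hypothetical stand-ins: in the paper, the first objective would come from a weight-sharing supernet (low fidelity) and the second from a surrogate model fitted to a small number of high-fidelity, adversarially trained evaluations.

```python
import random

# Hypothetical stand-ins for the two objectives (both minimized): the paper
# uses a supernet-based low-fidelity predictor and a surrogate trained on
# high-fidelity adversarial-training results; here they are toy functions.
def low_fidelity_error(arch):
    return 1.0 - sum(arch) / len(arch) + random.gauss(0, 0.01)

def surrogate_robust_error(arch):
    return 1.0 - 0.1 * sum(arch[:4])  # pretend early ops drive robustness

def dominates(f, g):
    """True if objective vector f is no worse than g everywhere, better somewhere."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def pareto_front(population):
    fits = [(low_fidelity_error(a), surrogate_robust_error(a)) for a in population]
    return [population[i] for i, fi in enumerate(fits)
            if not any(dominates(fj, fi) for j, fj in enumerate(fits) if j != i)]

# Each toy "architecture" is a bit-vector of operator choices.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
parents = pareto_front(population)  # survivors seed the next evolutionary round
print(len(parents), "non-dominated architectures")
```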
Related papers
- Compositional Curvature Bounds for Deep Neural Networks [7.373617024876726]
A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks.
We study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations.
We introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks.
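Reading "compositional" literally, one sound way such bounds combine is via the chain rule: for f = g∘h with Lipschitz constants L_g, L_h and second-derivative bounds M_g, M_h, we have (g∘h)'' = g''(h)·(h')² + g'(h)·h'', hence M_{g∘h} ≤ M_g·L_h² + L_g·M_h and L_{g∘h} ≤ L_g·L_h. A minimal sketch under that assumption (not the paper's exact algorithm):

```python
def compose_bounds(layers):
    """Propagate (Lipschitz, curvature) bounds through a layer composition.

    layers: list of (L_i, M_i) pairs, innermost layer first, where L_i bounds
    the first derivative and M_i the second derivative of layer i.
    """
    L, M = layers[0]
    for L_g, M_g in layers[1:]:
        M = M_g * L ** 2 + L_g * M  # chain rule: g''(h)(h')^2 + g'(h)h''
        L = L_g * L
    return L, M

# Three smooth unit-Lipschitz layers, each with curvature bound 0.1.
print(compose_bounds([(1.0, 0.1)] * 3))  # -> (1.0, ~0.3)
```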
arXiv Detail & Related papers (2024-06-07T17:50:15Z)
- Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation [1.9100854225243937]
We propose a novel efficient search of comprehensively robust neural architectures via multi-fidelity evaluation (ES-CRNA-ME).
Specifically, we first search for comprehensively robust architectures under multiple types of evaluations using the weight-sharing-based NAS method.
We reduce the number of robustness evaluations via correlation analysis, which merges similar evaluations and lowers the evaluation cost.
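One plausible reading of the correlation step, sketched below (not the ES-CRNA-ME code): score a pilot set of architectures under every robustness evaluation, then keep only evaluations that are not strongly correlated with one already kept.

```python
import numpy as np

def select_evaluations(scores, threshold=0.9):
    """scores: (n_architectures, n_evaluations) robustness results on a pilot set.
    Greedily keeps one representative per group of highly correlated evaluations."""
    corr = np.corrcoef(scores, rowvar=False)  # evaluation-by-evaluation correlation
    kept = []
    for j in range(scores.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return kept  # indices of the evaluations worth running on new architectures

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 1))
# Evaluations 0 and 1 nearly duplicate each other; evaluation 2 is independent.
scores = np.hstack([base, base + 0.01 * rng.normal(size=(50, 1)),
                    rng.normal(size=(50, 1))])
print(select_evaluations(scores))  # -> [0, 2]
```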
arXiv Detail & Related papers (2023-05-12T08:28:58Z)
- Differentiable Search of Accurate and Robust Architectures [22.435774101990752]
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and adversarial training has been drawing increasing attention because of its simplicity and effectiveness.
We propose DSARA to automatically search for neural architectures that are accurate and robust after adversarial training.
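In the spirit of differentiable search (a generic DARTS-style sketch with made-up dimensions, not the DSARA implementation), the architecture parameters can be optimized against the sum of a clean loss and an adversarial loss:

```python
import torch
import torch.nn.functional as F

alpha = torch.zeros(2, requires_grad=True)  # architecture mixing weights
op1, op2 = torch.nn.Linear(10, 2), torch.nn.Linear(10, 2)  # candidate ops

def forward(x):
    w = torch.softmax(alpha, dim=0)  # continuous relaxation of the op choice
    return w[0] * op1(x) + w[1] * op2(x)

def fgsm(x, y, eps=0.1):
    """One-step L-infinity adversarial example against the current mixture."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(forward(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

opt = torch.optim.Adam([alpha], lr=0.01)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = F.cross_entropy(forward(x), y) + F.cross_entropy(forward(fgsm(x, y)), y)
opt.zero_grad(); loss.backward(); opt.step()  # one architecture update
```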
arXiv Detail & Related papers (2022-12-28T08:36:36Z)
- On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error [65.51757376525798]
The existence of adversarial perturbations has opened an interesting research line on provable robustness.
However, no provable results have been presented to estimate and bound the error committed.
This paper proposes two lightweight strategies to find the minimal adversarial perturbation.
The obtained results show that the proposed strategies approximate the theoretical distance and robustness for samples close to the classification boundary, leading to provable guarantees against any adversarial attack.
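Strategies of this kind are often built on a bracketing search over the perturbation radius. A generic sketch (not the paper's provable procedure): bisect for the smallest step along a fixed direction that flips the prediction.

```python
import numpy as np

def minimal_radius(predict, x, direction, hi=1.0, tol=1e-4):
    """Smallest eps with predict(x + eps*direction) != predict(x), via bisection."""
    label, lo = predict(x), 0.0
    if predict(x + hi * direction) == label:
        return None  # no label flip inside the bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if predict(x + mid * direction) == label:
            lo = mid  # still classified as before: move the lower bound up
        else:
            hi = mid  # flipped: tighten the upper bound
    return hi

# Toy classifier: sign of the first coordinate; the true minimal radius is 0.3.
predict = lambda x: int(x[0] > 0)
print(minimal_radius(predict, np.array([0.3, 0.0]), np.array([-1.0, 0.0])))
```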
arXiv Detail & Related papers (2022-01-04T16:40:03Z)
- Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on adversarial robustness against L-0, L-2, and L-infinity attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
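A small-scale version of such a study can pair PyTorch's built-in magnitude pruning with a one-step L-infinity attack (a hedged sketch; the paper's exact attack suite and pruning protocol may differ):

```python
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 2))

# Magnitude-prune 50% of the weights in every linear layer.
for m in model.modules():
    if isinstance(m, torch.nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)

def fgsm_accuracy(model, x, y, eps):
    """Accuracy under a one-step L-infinity (FGSM) attack with budget eps."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).detach()
    return (model(x_adv).argmax(dim=1) == y).float().mean().item()

x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
print("robust accuracy at eps=0.1:", fgsm_accuracy(model, x, y, 0.1))
```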
arXiv Detail & Related papers (2021-08-19T09:06:16Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
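Coverage-based monitors are commonly built on neuron-coverage-style statistics; the paper's specific coverage paradigms are not spelled out in the summary, so the sketch below is only one plausible shape: profile which units fire on trusted data, then flag inputs that activate previously unseen units.

```python
import torch

class CoverageMonitor:
    """Flags batches that activate hidden units never fired during profiling."""
    def __init__(self, layer):
        self.seen, self.novelty = None, 0.0
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        fired = (output > 0).any(dim=0)  # units active anywhere in the batch
        if self.seen is None:
            self.seen = torch.zeros_like(fired)
        self.novelty = (fired & ~self.seen).float().mean().item()
        self.seen |= fired

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU())
monitor = CoverageMonitor(model[1])  # watch the ReLU activations
model(torch.randn(256, 10))          # profiling pass on trusted inputs
model(10 * torch.randn(4, 10))       # a suspicious, out-of-range batch
print("fraction of newly covered units:", monitor.novelty)
```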
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
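The quoted schedule is straightforward to express with a standard scheduler. A sketch of one plausible "slow start, fast decay" shape, with made-up hyperparameters:

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def slow_start_fast_decay(epoch, warmup=5, decay=0.5):
    """Scale the base LR: linear ramp-up over `warmup` epochs, then halve
    it every epoch afterwards."""
    if epoch < warmup:
        return (epoch + 1) / warmup       # slow start
    return decay ** (epoch - warmup + 1)  # fast decay

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=slow_start_fast_decay)
for epoch in range(10):
    # ... one epoch of adversarial fine-tuning would run here ...
    opt.step()
    sched.step()
    print(epoch, opt.param_groups[0]["lr"])
```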
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization algorithm, an alternating back-propagation training scheme is introduced to train the network and noise parameters consecutively.
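The alternating scheme can be hedged as follows: inject learnable noise scales into a layer and switch between updating the network weights and the noise parameters (a generic sketch; Learn2Perturb's actual noise parameterization and the regularizer that keeps the noise from collapsing are omitted):

```python
import torch
import torch.nn.functional as F

class NoisyLinear(torch.nn.Module):
    """Linear layer whose output is perturbed by learnable noise scales."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.lin = torch.nn.Linear(n_in, n_out)
        self.sigma = torch.nn.Parameter(0.1 * torch.ones(n_out))

    def forward(self, x):
        return self.lin(x) + self.sigma * torch.randn_like(self.sigma)

model = NoisyLinear(10, 2)
w_opt = torch.optim.SGD(model.lin.parameters(), lr=0.1)  # network weights
n_opt = torch.optim.SGD([model.sigma], lr=0.01)          # noise parameters

x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
for step in range(20):
    opt = w_opt if step % 2 == 0 else n_opt  # alternate parameter groups
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```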
arXiv Detail & Related papers (2020-03-02T18:27:35Z)