Searching for Robust Neural Architectures via Comprehensive and Reliable
Evaluation
- URL: http://arxiv.org/abs/2203.03128v1
- Date: Mon, 7 Mar 2022 04:45:05 GMT
- Title: Searching for Robust Neural Architectures via Comprehensive and Reliable
Evaluation
- Authors: Jialiang Sun, Tingsong Jiang, Chao Li, Weien Zhou, Xiaoya Zhang, Wen
Yao, Xiaoqian Chen
- Abstract summary: We propose a novel framework, called Auto Adversarial Attack and Defense (AAAD), where we employ neural architecture search methods.
We consider four types of robustness evaluations, including adversarial noise, natural noise, system noise and quantified metrics.
Empirical results on the CIFAR10 dataset show that the efficient attack found by the search helps identify more robust architectures.
- Score: 6.612134996737988
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) can help discover robust network
architectures, and defining robustness evaluation metrics is a key step in that
process. However, current robustness evaluations in NAS are not sufficiently
comprehensive or reliable. In particular, common practice considers only
adversarial noise and quantified metrics such as the Jacobian matrix, whereas
some studies have indicated that models are also vulnerable to other types of
noise, such as natural noise. In addition, existing methods that take
adversarial noise as the evaluation simply use the robust accuracy under FGSM
or PGD, but these attacks cannot provide an adequately reliable evaluation,
leaving the resulting models vulnerable to stronger attacks. To alleviate these
problems, we propose a novel framework called Auto Adversarial Attack and
Defense (AAAD), which employs neural architecture search methods and considers
four types of robustness evaluations, including adversarial noise, natural
noise, system noise, and quantified metrics, thereby assisting in finding more
robust architectures. In addition, for adversarial noise, we use a composite
adversarial attack obtained by random search as a new metric to evaluate the
robustness of model architectures. Empirical results on the CIFAR10 dataset
show that the efficient attack found by the search helps identify more robust
architectures.
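The composite attack described in the abstract can be illustrated with a minimal, self-contained sketch: random search samples short sequences of perturbation operators and keeps the sequence that drives robust accuracy lowest. The toy linear model, the operator pool (fgsm_like, noise, shift), and all budgets below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a fixed linear classifier standing in for a candidate network.
X = rng.normal(size=(200, 10))
w = rng.normal(size=10)
y = (X @ w > 0).astype(int)

def predict(x):
    return (x @ w > 0).astype(int)

# Hypothetical operator pool: each op perturbs inputs within a small budget.
def fgsm_like(x):
    # one-step sign perturbation that pushes each example against its label
    return x - 0.1 * np.outer(2 * y - 1, np.sign(w))

def noise(x):
    # random noise, a stand-in for natural noise
    return x + 0.05 * rng.normal(size=x.shape)

def shift(x):
    # constant offset, a stand-in for a system-level corruption
    return x + 0.05

ATTACK_POOL = [fgsm_like, noise, shift]

def robust_accuracy(attack_seq):
    # apply the operators in sequence, then measure accuracy on the result
    x_adv = X.copy()
    for op in attack_seq:
        x_adv = op(x_adv)
    return (predict(x_adv) == y).mean()

def search_composite(seq_len=3, trials=100):
    # random search: keep the sequence yielding the LOWEST robust accuracy
    best_seq, best_acc = None, float("inf")
    for _ in range(trials):
        seq = [ATTACK_POOL[i] for i in rng.integers(len(ATTACK_POOL), size=seq_len)]
        acc = robust_accuracy(seq)
        if acc < best_acc:
            best_seq, best_acc = seq, acc
    return best_seq, best_acc

best_seq, best_acc = search_composite()
print([op.__name__ for op in best_seq], f"robust accuracy: {best_acc:.3f}")
```

In AAAD itself, a composite attack found this way replaces the fixed FGSM/PGD evaluation when scoring candidate architectures.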
Related papers
- Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search [20.258148613490132]
We present a data poisoning attack that is injected into the training data used for architecture search.
We first define the attack objective for crafting poisoning samples that can induce the victim to generate sub-optimal architectures.
We present techniques that the attacker can use to significantly reduce the computational costs of crafting poisoning samples.
arXiv Detail & Related papers (2024-05-09T19:55:07Z)
- RobustMQ: Benchmarking Robustness of Quantized Models [54.15661421492865]
Quantization is an essential technique for deploying deep neural networks (DNNs) on devices with limited resources.
We thoroughly evaluate the robustness of quantized models against various noises (adversarial attacks, natural corruptions, and systematic noises) on ImageNet.
Our research contributes to advancing the robust quantization of models and their deployment in real-world scenarios.
arXiv Detail & Related papers (2023-08-04T14:37:12Z)
- A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking [17.012502610423006]
Deep neural networks (DNNs) have found widespread applications in interpreting remote sensing (RS) imagery.
It has been demonstrated in previous works that DNNs are vulnerable to different types of noises, particularly adversarial noises.
This study represents the first comprehensive examination of both natural robustness and adversarial robustness in RS tasks.
arXiv Detail & Related papers (2023-06-21T08:52:35Z)
- Neural Architecture Design and Robustness: A Dataset [11.83842808044211]
We introduce a database on neural architecture design and robustness evaluations.
We evaluate all these networks on a range of common adversarial attacks and corruption types.
We find that carefully crafting the topology of a network can have substantial impact on its robustness.
arXiv Detail & Related papers (2023-06-11T16:02:14Z)
- From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework [91.94389491920309]
Textual adversarial attacks can discover models' weaknesses by adding semantics-preserving but misleading perturbations to the inputs.
The existing practice of robustness evaluation may suffer from incomplete evaluation, impractical evaluation protocols, and invalid adversarial samples.
We set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to exploit the advantages of adversarial attacks.
arXiv Detail & Related papers (2023-05-29T14:55:20Z)
- Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation [1.9100854225243937]
We propose a novel efficient search of comprehensively robust neural architectures via multi-fidelity evaluation (ES-CRNA-ME).
Specifically, we first search for comprehensively robust architectures under multiple types of evaluations using the weight-sharing-based NAS method.
We reduce the number of robustness evaluations through correlation analysis, which merges similar evaluations and decreases the evaluation cost (a correlation-pruning sketch appears after this list).
arXiv Detail & Related papers (2023-05-12T08:28:58Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Towards Adversarially Robust Deep Image Denoising [199.2458715635285]
This work systematically investigates the adversarial robustness of deep image denoisers (DIDs).
We propose a novel adversarial attack, the Observation-based Zero-mean Attack (ObsAtk), to craft adversarial zero-mean perturbations on given noisy images (a zero-mean projection sketch appears after this list).
To robustify DIDs, we propose hybrid adversarial training (HAT), which jointly trains DIDs with adversarial and non-adversarial noisy data.
arXiv Detail & Related papers (2022-01-12T10:23:14Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- DSRNA: Differentiable Search of Robust Neural Architectures [11.232234265070753]
In deep learning applications, the architectures of deep neural networks are crucial in achieving high accuracy.
We propose methods to perform differentiable search of robust neural architectures.
Our methods are more robust to various norm-bound attacks than several robust NAS baselines.
arXiv Detail & Related papers (2020-12-11T04:52:54Z)
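For the ES-CRNA-ME entry above, the correlation-based reduction of robustness evaluations can be sketched as follows: if two evaluations score architectures almost identically, only one representative is kept. The synthetic scores and the 0.95 threshold are illustrative assumptions, not the paper's actual evaluations or cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# scores[i, j]: robustness score of architecture i under evaluation j.
# Evaluations 0 and 1 are nearly identical; evaluation 2 is independent.
base = rng.normal(size=(50, 1))
scores = np.hstack([base + 0.05 * rng.normal(size=(50, 1)),
                    base + 0.05 * rng.normal(size=(50, 1)),
                    rng.normal(size=(50, 1))])

corr = np.corrcoef(scores.T)  # pairwise correlation between evaluations
keep = []
for j in range(corr.shape[0]):
    # keep evaluation j only if it is not highly correlated with one already kept
    if all(abs(corr[j, k]) < 0.95 for k in keep):
        keep.append(j)
print("evaluations kept:", keep)  # highly correlated evaluations merged into one
```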
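For the deep image denoising entry above, the zero-mean constraint behind ObsAtk can be illustrated with a minimal projection: the per-image mean is removed so the perturbation does not change the average noise level of the input. This mean-removal step is standard; the attack loop around it is the paper's contribution and is not reproduced here, and the batch shape below is an illustrative assumption.

```python
import numpy as np

def project_zero_mean(delta: np.ndarray) -> np.ndarray:
    """Remove the per-image mean so each perturbation has zero mean."""
    return delta - delta.mean(axis=(1, 2, 3), keepdims=True)

# Example: a batch of 4 RGB 8x8 perturbations.
delta = np.random.default_rng(0).normal(size=(4, 3, 8, 8))
delta = project_zero_mean(delta)
print(delta.mean(axis=(1, 2, 3)))  # ~0 for every image
```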
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.