AdvRush: Searching for Adversarially Robust Neural Architectures
- URL: http://arxiv.org/abs/2108.01289v1
- Date: Tue, 3 Aug 2021 04:27:33 GMT
- Title: AdvRush: Searching for Adversarially Robust Neural Architectures
- Authors: Jisoo Mok, Byunggook Na, Hyeokjun Choe, Sungroh Yoon
- Abstract summary: We propose AdvRush, a novel adversarial robustness-aware neural architecture search algorithm.
Through a regularizer that favors a candidate architecture with a smoother input loss landscape, AdvRush successfully discovers an adversarially robust neural architecture.
- Score: 17.86463546971522
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks continue to awe the world with their remarkable
performance. Their predictions, however, are prone to be corrupted by
adversarial examples that are imperceptible to humans. Current efforts to
improve the robustness of neural networks against adversarial examples are
focused on developing robust training methods, which update the weights of a
neural network in a more robust direction. In this work, we take a step beyond
training of the weight parameters and consider the problem of designing an
adversarially robust neural architecture with high intrinsic robustness. We
propose AdvRush, a novel adversarial robustness-aware neural architecture
search algorithm, based upon a finding that independent of the training method,
the intrinsic robustness of a neural network can be represented with the
smoothness of its input loss landscape. Through a regularizer that favors a
candidate architecture with a smoother input loss landscape, AdvRush
successfully discovers an adversarially robust neural architecture. Along with
a comprehensive theoretical motivation for AdvRush, we conduct an extensive
set of experiments to demonstrate the efficacy of AdvRush on various
benchmark datasets. Notably, on CIFAR-10, AdvRush achieves 55.91% robust
accuracy under FGSM attack after standard training and 50.04% robust accuracy
under AutoAttack after 7-step PGD adversarial training.
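
The core idea lends itself to a short illustration. Below is a minimal PyTorch sketch of a search loss augmented with an input-smoothness regularizer; the finite-difference curvature proxy, the step size h, and the weight gamma are illustrative assumptions, not the exact estimator specified in the paper.

```python
import torch
import torch.nn.functional as F

def input_smoothness_penalty(model, x, y, h=1e-3):
    """Finite-difference proxy for the curvature of the loss w.r.t. the input.
    Assumes NCHW image batches; h is an illustrative step size."""
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x)[0]
    # Normalized gradient direction used for the finite-difference step.
    v = g / (g.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    x2 = (x.detach() + h * v).requires_grad_(True)
    # create_graph=True keeps the penalty differentiable w.r.t. the parameters.
    g2 = torch.autograd.grad(F.cross_entropy(model(x2), y), x2, create_graph=True)[0]
    # ||g2 - g|| / h approximates ||Hv||: large values indicate a sharp landscape.
    return ((g2 - g.detach()) / h).flatten(1).norm(dim=1).mean()

def search_loss(model, x, y, gamma=0.1):
    """Task loss plus the smoothness regularizer; gamma is an assumed weight."""
    return F.cross_entropy(model(x), y) + gamma * input_smoothness_penalty(model, x, y)
```

In a differentiable NAS setting, a loss of this form would be minimized with respect to both weight and architecture parameters during the search phase, steering the search toward candidates with smoother input loss landscapes.
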
Related papers
- Enhancing Adversarial Training via Reweighting Optimization Trajectory [72.75558017802788]
A number of approaches, such as extra regularization, adversarial weight perturbation, and training with more data, have been proposed to address the drawbacks of adversarial training.
We propose a new method named Weighted Optimization Trajectories (WOT) that leverages the optimization trajectories of adversarial training over time.
Our results show that WOT integrates seamlessly with the existing adversarial training methods and consistently overcomes the robust overfitting issue.
arXiv Detail & Related papers (2023-06-25T15:53:31Z)
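
A rough sketch of the trajectory-reweighting idea, assuming fixed exponentially decayed weights over recent checkpoints; WOT itself determines the combination weights differently, so treat this purely as an illustration:

```python
import copy
import torch

class TrajectoryAverager:
    """Keeps recent parameter snapshots and forms their weighted average."""

    def __init__(self, buffer_size=5, decay=0.5):
        self.snapshots = []          # newest snapshot last
        self.buffer_size = buffer_size
        self.decay = decay           # illustrative fixed decay; WOT learns its weights

    def record(self, model):
        self.snapshots.append(copy.deepcopy(model.state_dict()))
        if len(self.snapshots) > self.buffer_size:
            self.snapshots.pop(0)

    def averaged_state(self):
        n = len(self.snapshots)
        weights = [self.decay ** (n - 1 - i) for i in range(n)]  # newer -> larger
        total = sum(weights)
        avg = {}
        for k, v in self.snapshots[-1].items():
            if torch.is_floating_point(v):
                avg[k] = sum((w / total) * s[k] for w, s in zip(weights, self.snapshots))
            else:
                avg[k] = v.clone()   # keep integer buffers (e.g., BatchNorm counters)
        return avg
```

Calling record(model) at chosen points of adversarial training and later model.load_state_dict(averager.averaged_state()) would deploy the averaged parameters.
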
- Wavelets Beat Monkeys at Adversarial Robustness [0.8702432681310401]
We show how physically inspired structures yield new insights into robustness that were previously thought possible only by meticulously mimicking the human cortex.
arXiv Detail & Related papers (2023-04-19T03:41:30Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Differentiable Search of Accurate and Robust Architectures [22.435774101990752]
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks.
Adversarial training has been drawing increasing attention because of its simplicity and effectiveness.
We propose DSARA to automatically search for neural architectures that are accurate and robust after adversarial training.
arXiv Detail & Related papers (2022-12-28T08:36:36Z)
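
A hedged sketch of a differentiable search update that balances clean and robust objectives, in the spirit of DSARA; the single-step FGSM attack, the weight lam, and the assumption that arch_optimizer holds only architecture parameters are all illustrative choices:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Single-step adversarial example (FGSM); the paper's attack may differ."""
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def architecture_step(model, arch_optimizer, x_val, y_val, lam=1.0):
    """Update architecture parameters on a weighted sum of clean and robust loss."""
    x_adv = fgsm_example(model, x_val, y_val)
    loss = (F.cross_entropy(model(x_val), y_val)
            + lam * F.cross_entropy(model(x_adv), y_val))
    arch_optimizer.zero_grad()
    loss.backward()
    arch_optimizer.step()
```

As in DARTS-style search, this architecture update would alternate with ordinary weight updates on the training split.
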
- RobustART: Benchmarking Robustness on Architecture Design and Training Techniques [170.3297213957074]
Deep neural networks (DNNs) are vulnerable to adversarial noise.
There are no comprehensive studies of how architecture design and training techniques affect robustness.
We propose the first comprehensive investigation benchmark on ImageNet.
arXiv Detail & Related papers (2021-09-11T08:01:14Z)
- ASAT: Adaptively Scaled Adversarial Training in Time Series [21.65050910881857]
We take the first step toward introducing adversarial training into time series analysis, using the finance field as an example.
We propose the adaptively scaled adversarial training (ASAT) in time series analysis, by treating data at different time slots with time-dependent importance weights.
Experimental results show that the proposed ASAT can improve both the accuracy and the adversarial robustness of neural networks.
arXiv Detail & Related papers (2021-08-20T03:13:34Z)
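
A minimal sketch of time-dependent adversarial perturbation for time series, assuming a linear importance ramp toward recent time slots; ASAT's actual weighting schemes differ from this illustration:

```python
import torch
import torch.nn.functional as F

def asat_style_perturbation(model, x, y, eps=0.01):
    """x: (batch, time, features). Returns an FGSM-style adversarial example
    whose per-time-step budget is scaled by an importance weight."""
    T = x.size(1)
    # Illustrative time-dependent weights, increasing toward the most recent slot.
    w = torch.linspace(0.1, 1.0, T, device=x.device).view(1, T, 1)
    x_adv = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * w * grad.sign()).detach()
```

Training on such examples alongside clean data is the usual adversarial training recipe; the time-dependent scaling is what distinguishes the ASAT idea.
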
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
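
A minimal sketch of a "slow start, fast decay" schedule using PyTorch's LambdaLR; warmup_steps and decay are illustrative placeholders, not the paper's values:

```python
import torch

def slow_start_fast_decay(optimizer, warmup_steps=500, decay=0.97):
    """Linear warmup ("slow start") followed by exponential decay ("fast decay")."""
    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)    # slow linear ramp-up
        return decay ** (step - warmup_steps)     # fast exponential decay
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```

scheduler.step() would be called after each optimizer update during adversarial fine-tuning.
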
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, simultaneously achieves compressed networks with state-of-the-art benign and robust accuracy.
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
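
A hedged sketch of robust-objective-aware pruning in the spirit of HYDRA: each connection gets a learnable importance score, the forward pass keeps only the top-scoring fraction, and the scores can be optimized under the adversarial training loss. Layer choice, initialization, and straight-through details are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScoredLinear(nn.Module):
    """Linear layer whose weights are masked by learnable importance scores."""

    def __init__(self, in_features, out_features, sparsity=0.9):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        self.scores = nn.Parameter(self.weight.abs().clone())  # importance scores
        self.sparsity = sparsity

    def mask(self):
        # Keep the top (1 - sparsity) fraction of connections by score.
        k = max(1, int(self.scores.numel() * (1 - self.sparsity)))
        threshold = self.scores.flatten().kthvalue(self.scores.numel() - k + 1).values
        hard = (self.scores >= threshold).float()
        # Straight-through estimator: hard mask in the forward pass,
        # identity gradient to the scores in the backward pass.
        return hard + self.scores - self.scores.detach()

    def forward(self, x):
        return F.linear(x, self.weight * self.mask())
```

During the pruning phase one would freeze weight and optimize scores with the robust training loss, then fix the resulting mask and fine-tune the remaining weights.
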