Neural Architecture Design and Robustness: A Dataset
- URL: http://arxiv.org/abs/2306.06712v1
- Date: Sun, 11 Jun 2023 16:02:14 GMT
- Title: Neural Architecture Design and Robustness: A Dataset
- Authors: Steffen Jung, Jovita Lukasik, Margret Keuper
- Abstract summary: We introduce a database on neural architecture design and robustness evaluations.
We evaluate all these networks on a range of common adversarial attacks and corruption types.
We find that carefully crafting the topology of a network can have a substantial impact on its robustness.
- Score: 11.83842808044211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models have proven to be successful in a wide range of machine
learning tasks. Yet, they are often highly sensitive to perturbations on the
input data which can lead to incorrect decisions with high confidence,
hampering their deployment for practical use-cases. Thus, finding architectures
that are (more) robust against perturbations has received much attention in
recent years. Just like the search for well-performing architectures in terms
of clean accuracy, this usually involves a tedious trial-and-error process with
one additional challenge: the evaluation of a network's robustness is
significantly more expensive than its evaluation for clean accuracy. Thus, the
aim of this paper is to facilitate better streamlined research on architectural
design choices with respect to their impact on robustness as well as, for
example, the evaluation of surrogate measures for robustness. We therefore
borrow one of the most commonly considered search spaces for neural
architecture search for image classification, NAS-Bench-201, which contains a
manageable 6466 non-isomorphic network designs. We evaluate all these
networks on a range of common adversarial attacks and corruption types and
introduce a database on neural architecture design and robustness evaluations.
We further present three exemplary use cases of this dataset, in which we (i)
benchmark robustness measurements based on Jacobian and Hessian matrices for
their robustness predictability, (ii) perform neural architecture search on
robust accuracies, and (iii) provide an initial analysis of how architectural
design choices affect robustness. We find that carefully crafting the topology
of a network can have a substantial impact on its robustness: networks with the
same parameter count range from 20% to 41% in mean adversarial robust accuracy.
Code and data are available at http://robustness.vision/.
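The abstract notes that evaluating robustness is significantly more expensive than evaluating clean accuracy. As a minimal sketch of why, the snippet below measures clean and adversarially robust accuracy on a toy linear classifier under a single-step FGSM attack. This is an illustrative assumption throughout: the dataset itself evaluates full NAS-Bench-201 CNNs under a range of attacks and corruptions, and the linear model, blob data, and epsilon here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "architecture": a fixed linear classifier, logits = x @ W.
# (Assumption for illustration; the dataset evaluates full CNNs.)
W = np.array([[-1.0, 1.0],
              [-1.0, 1.0]])

def logits(x):
    return x @ W

def accuracy(x, y):
    return float((logits(x).argmax(axis=1) == y).mean())

def fgsm(x, y, eps):
    # Single-step FGSM: move each input in the sign direction of the
    # cross-entropy gradient w.r.t. the input. For a linear model that
    # gradient is (softmax(z) - one_hot(y)) @ W.T, so no autodiff is needed.
    z = logits(x)
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0
    grad_x = p @ W.T
    return x + eps * np.sign(grad_x)

# Two Gaussian blobs as stand-in data for two image classes.
x = np.vstack([rng.normal(loc=-2.0, size=(200, 2)),
               rng.normal(loc=+2.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

clean_acc = accuracy(x, y)                     # one forward pass
robust_acc = accuracy(fgsm(x, y, eps=1.0), y)  # extra gradient + forward pass
```

Even this weakest attack doubles the per-example cost (one gradient plus one extra forward pass); the iterative attacks typically used for benchmarking multiply it further, which is exactly the overhead the dataset amortizes by precomputing robustness for all 6466 architectures.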
Related papers
- Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search [20.258148613490132]
We present a data poisoning attack that is injected into the training data used for architecture search.
We first define the attack objective for crafting poisoning samples that can induce the victim to generate sub-optimal architectures.
We present techniques that the attacker can use to significantly reduce the computational costs of crafting poisoning samples.
arXiv Detail & Related papers (2024-05-09T19:55:07Z) - Towards Accurate and Robust Architectures via Neural Architecture Search [3.4014222238829497]
Adversarial training improves accuracy and robustness by adjusting the weights associated with the architecture.
We propose ARNAS to search for accurate and robust architectures for adversarial training.
arXiv Detail & Related papers (2024-05-09T02:16:50Z) - Robust NAS under adversarial training: benchmark, theory, and beyond [55.51199265630444]
We release a comprehensive data set that encompasses both clean accuracy and robust accuracy for a vast array of adversarially trained networks.
We also establish a generalization theory for searching architecture in terms of clean accuracy and robust accuracy under multi-objective adversarial training.
arXiv Detail & Related papers (2024-03-19T20:10:23Z) - A Comprehensive Study on Robustness of Image Classification Models:
Benchmarking and Rethinking [54.89987482509155]
Robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z) - Differentiable Search of Accurate and Robust Architectures [22.435774101990752]
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks.
Adversarial training has been drawing increasing attention because of its simplicity and effectiveness.
We propose DSARA to automatically search for the neural architectures that are accurate and robust after adversarial training.
arXiv Detail & Related papers (2022-12-28T08:36:36Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - DSRNA: Differentiable Search of Robust Neural Architectures [11.232234265070753]
In deep learning applications, the architectures of deep neural networks are crucial in achieving high accuracy.
We propose methods to perform differentiable search of robust neural architectures.
Our methods are more robust to various norm-bound attacks than several robust NAS baselines.
arXiv Detail & Related papers (2020-12-11T04:52:54Z) - A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on the postulate that robustness of a classifier should be considered a property independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Dataless Model Selection with the Deep Frame Potential [45.16941644841897]
We quantify networks by their intrinsic capacity for unique and robust representations.
We propose the deep frame potential: a measure of coherence that is approximately related to representation stability but has minimizers that depend only on network structure.
We validate its use as a criterion for model selection and demonstrate correlation with generalization error on a variety of common residual and densely connected network architectures.
arXiv Detail & Related papers (2020-03-30T23:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.