Poisoning the Search Space in Neural Architecture Search
- URL: http://arxiv.org/abs/2106.14406v1
- Date: Mon, 28 Jun 2021 05:45:57 GMT
- Title: Poisoning the Search Space in Neural Architecture Search
- Authors: Robert Wu, Nayan Saxena, Rohan Jain
- Abstract summary: We evaluate the robustness of one such algorithm known as Efficient NAS against data poisoning attacks on the original search space.
Our results provide insights into the challenges that must be surmounted in using NAS for more adversarially robust architecture search.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has proven to be a highly effective problem-solving tool for object detection and image segmentation across various domains such as healthcare and autonomous driving. At the heart of this performance lies neural architecture design, which relies heavily on domain knowledge and the researchers' prior experience. More recently, this process of finding optimal architectures, given an initial search space of possible operations, has been automated by Neural Architecture Search (NAS). In this paper, we evaluate the robustness of one such algorithm, Efficient NAS (ENAS), against data-agnostic poisoning attacks on the original search space with carefully designed ineffective operations. By evaluating algorithm performance on the CIFAR-10 dataset, we empirically demonstrate how our novel search space poisoning (SSP) approach and multiple-instance poisoning attacks exploit design flaws in the ENAS controller, resulting in inflated prediction error rates for child networks. Our results provide insights into the challenges that must be surmounted in using NAS for more adversarially robust architecture search.
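To make the attack concrete, here is a minimal, hypothetical sketch (not the authors' code) of how an ENAS-style operation pool might be poisoned: ineffective operations that type-check as valid layers but destroy the incoming signal are injected, with multiple instances raising the chance that the controller samples them. The operation names and pool structure below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical ENAS-style operation pool: the controller samples child
# networks by choosing one operation name per edge/layer.
ORIGINAL_SEARCH_SPACE = {
    "conv3x3":  lambda c: nn.Conv2d(c, c, 3, padding=1),
    "conv5x5":  lambda c: nn.Conv2d(c, c, 5, padding=2),
    "max_pool": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
    "identity": lambda c: nn.Identity(),
}

def _zeroed_conv(channels: int) -> nn.Module:
    conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
    nn.init.zeros_(conv.weight)
    conv.weight.requires_grad_(False)  # gradients can never repair it
    return conv

# "Ineffective" operations in the spirit of the paper: valid layers that
# destroy or ignore the signal, so any child network sampling them trains poorly.
POISON_OPS = {
    "dropout_all": lambda c: nn.Dropout(p=1.0),  # zeroes activations in training mode
    "zero_conv":   lambda c: _zeroed_conv(c),    # conv frozen at weight 0
}

def poison_search_space(space: dict, n_instances: int = 1) -> dict:
    """Multiple-instance poisoning: inject each bad op n times so the
    controller is proportionally more likely to sample it."""
    poisoned = dict(space)
    for name, ctor in POISON_OPS.items():
        for i in range(n_instances):
            poisoned[f"{name}_{i}"] = ctor
    return poisoned

poisoned_space = poison_search_space(ORIGINAL_SEARCH_SPACE, n_instances=3)
print(sorted(poisoned_space))
```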
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Masked Autoencoders Are Robust Neural Architecture Search Learners [14.965550562292476]
We propose a novel NAS framework based on Masked Autoencoders (MAE) that eliminates the need for labeled data during the search process.
By replacing the supervised learning objective with an image reconstruction task, our approach enables the robust discovery of network architectures.
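As a rough illustration (an assumption, not the paper's implementation), the label-free search objective could take the form of a masked-reconstruction loss in the spirit of MAE:

```python
import torch
import torch.nn as nn

def masked_reconstruction_loss(model, images, mask_ratio=0.75):
    """Hypothetical unlabeled search objective: mask most pixels and score
    candidate architectures on reconstructing them, so no labels are
    needed during the search."""
    # Binary keep-mask shared across channels; keeps ~25% of pixels.
    mask = (torch.rand_like(images[:, :1]) > mask_ratio).float()
    recon = model(images * mask)
    # Penalize error only on the masked-out region, as in MAE.
    return ((recon - images) ** 2 * (1 - mask)).mean()

# Usage with a toy candidate network (a real search would sample these).
candidate = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))
images = torch.randn(4, 3, 32, 32)
print(masked_reconstruction_loss(candidate, images).item())
```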
arXiv Detail & Related papers (2023-11-20T13:45:21Z)
- Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations [59.683234126055694]
Recent neural architecture search (NAS) frameworks have been successful in finding optimal architectures for given conditions.
We propose a novel lightweight robust zero-cost proxy that considers the consistency across features, parameters, and gradients of both clean and perturbed images.
Our approach facilitates an efficient and rapid search for neural architectures capable of learning generalizable features that exhibit robustness across diverse perturbations.
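A sketch of what such a proxy might compute, under the assumption that gradient consistency between clean and perturbed batches is the key signal (the cited paper's actual proxy also uses feature- and parameter-level terms):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_consistency_proxy(model: nn.Module,
                               x: torch.Tensor,
                               y: torch.Tensor,
                               eps: float = 0.03) -> float:
    """Hypothetical zero-cost proxy: cosine similarity between the loss
    gradients of a clean batch and a noise-perturbed batch. Higher
    similarity suggests features that stay stable under perturbation."""
    def flat_grads(inputs):
        model.zero_grad()
        loss = F.cross_entropy(model(inputs), y)
        loss.backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()
                          if p.grad is not None])

    g_clean = flat_grads(x)
    g_pert = flat_grads(x + eps * torch.randn_like(x))
    return F.cosine_similarity(g_clean, g_pert, dim=0).item()

# Usage: score an untrained candidate with a single batch, no training.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(gradient_consistency_proxy(model, x, y))
```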
arXiv Detail & Related papers (2023-06-08T08:34:26Z)
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple but efficient regularization method, termed Beta-Decay, to regularize the DARTS-based NAS searching process.
Experimental results on NAS-Bench-201 show that our proposed method can help to stabilize the searching process and make the searched network more transferable across different datasets.
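For orientation, the standard DARTS continuous relaxation, plus one simplified reading of a decay penalty on the softmaxed architecture weights beta, can be sketched as follows; the exact Beta-Decay formulation is in the paper, and the penalty below is an illustrative assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Standard DARTS relaxation: an edge's output is the softmax-weighted
    sum of all candidate operations on that edge."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture logits

    def forward(self, x):
        beta = F.softmax(self.alpha, dim=0)  # the "beta" being regularized
        return sum(b * op(x) for b, op in zip(beta, self.ops))

def beta_decay_penalty(alpha: torch.Tensor) -> torch.Tensor:
    # Simplified reading (an assumption; see the paper for the exact form):
    # penalize the smooth maximum of the logits, which pulls the softmaxed
    # weights toward uniformity and keeps one op from dominating early.
    return torch.logsumexp(alpha, dim=0)

edge = MixedOp([nn.Conv2d(8, 8, 3, padding=1), nn.Identity()])
x = torch.randn(2, 8, 16, 16)
loss = edge(x).pow(2).mean() + 1e-3 * beta_decay_penalty(edge.alpha)
loss.backward()
```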
arXiv Detail & Related papers (2022-03-03T11:47:14Z)
- Towards One Shot Search Space Poisoning in Neural Architecture Search [0.0]
We evaluate the robustness of a Neural Architecture Search (NAS) algorithm known as Efficient NAS (ENAS) against data poisoning attacks on the original search space.
We empirically demonstrate how our one shot search space poisoning approach exploits design flaws in the ENAS controller to degrade predictive performance on classification tasks.
arXiv Detail & Related papers (2021-11-13T16:07:00Z)
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With a carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- Making Differentiable Architecture Search less local [9.869449181400466]
Differentiable neural architecture search (DARTS) is a promising NAS approach that dramatically increases search efficiency.
It has been shown to suffer from performance collapse, where the search often leads to detrimental architectures.
We develop a more global optimisation scheme that is able to better explore the space without changing the DARTS problem formulation.
arXiv Detail & Related papers (2021-04-21T10:36:43Z)
- Neural Architecture Search of SPD Manifold Networks [79.45110063435617]
We propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks.
We first introduce a geometrically rich and diverse SPD neural architecture search space for an efficient SPD cell design.
We exploit a differentiable NAS algorithm on our relaxed continuous search space for SPD neural architecture search.
arXiv Detail & Related papers (2020-10-27T18:08:57Z)
- Breaking the Curse of Space Explosion: Towards Efficient NAS with Curriculum Search [94.46818035655943]
We propose a curriculum search method that starts from a small search space and gradually incorporates the learned knowledge to guide the search in a large space.
With the proposed search strategy, our Curriculum Neural Architecture Search (CNAS) method significantly improves the search efficiency and finds better architectures than existing NAS methods.
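A hypothetical sketch of the curriculum idea (not the CNAS code): search a small operation pool first, then reuse the best architecture found so far to seed searches over progressively larger pools. The pool contents, stage sizes, and random-search stand-in below are assumptions for illustration.

```python
import random

FULL_OP_POOL = ["identity", "conv3x3", "conv5x5", "sep_conv3x3",
                "dil_conv3x3", "max_pool", "avg_pool", "skip"]

def search_stage(op_pool, n_layers=4, n_trials=50, seed_arch=None, evaluate=None):
    """Random-search stand-in for the per-stage NAS step; a real system
    would train and evaluate sampled child networks here."""
    best_arch = seed_arch
    best_score = evaluate(seed_arch) if seed_arch else -1.0
    for _ in range(n_trials):
        arch = [random.choice(op_pool) for _ in range(n_layers)]
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch

def curriculum_search(stages=(3, 5, 8), evaluate=None):
    best = None
    for k in stages:                 # small search space first, then larger
        pool = FULL_OP_POOL[:k]
        best = search_stage(pool, seed_arch=best, evaluate=evaluate)
    return best

# Toy evaluator rewarding diverse operations; a real one measures accuracy.
print(curriculum_search(evaluate=lambda a: len(set(a)) if a else 0))
```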
arXiv Detail & Related papers (2020-07-07T02:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.