Data-Free Neural Architecture Search via Recursive Label Calibration
- URL: http://arxiv.org/abs/2112.02086v1
- Date: Fri, 3 Dec 2021 18:53:16 GMT
- Title: Data-Free Neural Architecture Search via Recursive Label Calibration
- Authors: Zechun Liu and Zhiqiang Shen and Yun Long and Eric Xing and Kwang-Ting
Cheng and Chas Leichner
- Abstract summary: This paper aims to explore the feasibility of neural architecture search given only a pre-trained model without using any original training data.
We start by synthesizing usable data through recovering the knowledge from a pre-trained deep neural network.
We instantiate our proposed framework with three popular NAS algorithms: DARTS, ProxylessNAS and SPOS.
- Score: 34.84457903882755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to explore the feasibility of neural architecture search
(NAS) given only a pre-trained model without using any original training data.
This is an important circumstance for privacy protection, bias avoidance, etc.,
in real-world scenarios. To achieve this, we start by synthesizing usable data
through recovering the knowledge from a pre-trained deep neural network. Then
we use the synthesized data and their predicted soft-labels to guide neural
architecture search. We identify that the NAS task requires the synthesized
data (we target the image domain here) to have sufficient semantics, diversity, and a
minimal domain gap from natural images. For semantics, we propose recursive
label calibration to produce more informative outputs. For diversity, we
propose a regional update strategy to generate more diverse and
semantically-enriched synthetic data. For minimal domain gap, we use input and
feature-level regularization to mimic the original data distribution in latent
space. We instantiate our proposed framework with three popular NAS algorithms:
DARTS, ProxylessNAS and SPOS. Surprisingly, our results demonstrate that the
architectures discovered by searching with our synthetic data achieve accuracy
comparable to, or even higher than, that of architectures discovered by
searching on the original data. For the first time, this leads to the
conclusion that NAS can be performed effectively with no access to the
original (natural) data, provided the synthesis method is well designed. Our
code will be publicly available.
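To make the synthesis pipeline the abstract describes more concrete, below is a minimal PyTorch sketch, not the authors' released code. It combines the cross-entropy objective on soft labels (semantics), feature-level BatchNorm-statistics matching plus a total-variation input prior (domain gap), and a periodic recursive-label-calibration step. The helper names (`BNStatHook`, `synthesize`), loss weights, and calibration schedule (`calibrate_every`, `beta`) are illustrative assumptions; the regional update strategy for diversity is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class BNStatHook:
    """Penalizes the distance between a BatchNorm layer's running statistics
    and the batch statistics of the current synthetic images."""
    def __init__(self, bn: nn.BatchNorm2d):
        self.loss = torch.tensor(0.0)
        bn.register_forward_hook(self)

    def __call__(self, module, inputs, output):
        x = inputs[0]
        mean = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        self.loss = (F.mse_loss(mean, module.running_mean)
                     + F.mse_loss(var, module.running_var))

def synthesize(model: nn.Module, target: torch.Tensor, num_classes: int = 1000,
               steps: int = 2000, lr: float = 0.1,
               calibrate_every: int = 500, beta: float = 0.5):
    """Recover a batch of synthetic images for the class indices in `target`."""
    model.eval()
    hooks = [BNStatHook(m) for m in model.modules()
             if isinstance(m, nn.BatchNorm2d)]
    images = torch.randn(len(target), 3, 224, 224, requires_grad=True)
    soft = F.one_hot(target, num_classes).float()   # start from hard labels
    opt = torch.optim.Adam([images], lr=lr)

    for step in range(steps):
        opt.zero_grad()
        logits = model(images)
        # Semantics: cross-entropy against the (calibrated) soft labels.
        ce = -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
        # Domain gap: feature-level BN-statistics matching plus an
        # input-level total-variation smoothness prior.
        bn = torch.stack([h.loss for h in hooks]).sum()
        tv = (images[:, :, 1:, :] - images[:, :, :-1, :]).abs().mean() \
           + (images[:, :, :, 1:] - images[:, :, :, :-1]).abs().mean()
        (ce + 0.05 * bn + 1e-4 * tv).backward()  # loss weights are assumptions
        opt.step()
        # Recursive label calibration: periodically fold the teacher's own
        # soft prediction back into the target to make it more informative.
        if (step + 1) % calibrate_every == 0:
            with torch.no_grad():
                soft = beta * soft + (1 - beta) * F.softmax(model(images), dim=1)

    return images.detach(), soft

# Example: synthesize 8 images, one per class index 0..7.
# model = resnet50(weights="IMAGENET1K_V1")
# imgs, labels = synthesize(model, torch.arange(8))
```

The synthesized images and their calibrated soft labels would then stand in for the original training set when running a NAS algorithm such as DARTS, ProxylessNAS, or SPOS.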
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks [76.8112416450677]
Siamese networks are one of the most popular approaches to self-supervised visual representation learning (SSL).
NASiam is a novel approach that, for the first time, uses differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair).
NASiam reaches competitive performance on both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image classification datasets while costing only a few GPU hours.
arXiv Detail & Related papers (2023-01-31T19:48:37Z)
- Synthetic Dataset Generation for Privacy-Preserving Machine Learning [7.489265323050362]
We propose a method to generate secure synthetic datasets from the original private datasets.
We show that our proposed method preserves data-privacy under various privacy-leakage attacks.
arXiv Detail & Related papers (2022-10-06T20:54:52Z)
- UnrealNAS: Can We Search Neural Architectures with Unreal Data? [84.78460976605425]
Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs).
Previous work has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest.
We take a further step to question whether real data is necessary for NAS to be effective.
arXiv Detail & Related papers (2022-05-04T16:30:26Z)
- Self-Supervised Neural Architecture Search for Imbalanced Datasets [129.3987858787811]
Neural Architecture Search (NAS) provides state-of-the-art results when trained on well-curated datasets with annotated labels.
We propose a NAS-based framework whose contributions include: (a) we focus on the self-supervised scenario, where no labels are required to determine the architecture, and (b) we assume the datasets are imbalanced.
arXiv Detail & Related papers (2021-09-17T14:56:36Z)
- Search to aggregate neighborhood for graph neural network [47.47628113034479]
We propose a framework, which tries to Search to Aggregate NEighborhood (SANE) to automatically design data-specific GNN architectures.
By designing a novel and expressive search space, we propose a differentiable search algorithm, which is more efficient than previous reinforcement learning based methods.
arXiv Detail & Related papers (2021-04-14T03:15:19Z)
- Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI Stereo 2012 and 2015 and Middlebury benchmarks, as well as first on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z)
- Direct Federated Neural Architecture Search [0.0]
We present an effective approach for direct federated NAS: a hardware-agnostic, computationally lightweight, one-stage method for finding ready-to-deploy neural network models.
Our results show an order of magnitude reduction in resource consumption while edging out prior art in accuracy.
arXiv Detail & Related papers (2020-10-13T08:11:35Z)