D-DARTS: Distributed Differentiable Architecture Search
- URL: http://arxiv.org/abs/2108.09306v1
- Date: Fri, 20 Aug 2021 09:07:01 GMT
- Title: D-DARTS: Distributed Differentiable Architecture Search
- Authors: Alexandre Heuillet, Hedi Tabia, Hichem Arioui, Kamal Youcef-Toumi
- Abstract summary: Differentiable ARchiTecture Search (DARTS) is one of the most popular Neural Architecture Search (NAS) methods, but its weight-sharing greatly reduces the search space.
We propose D-DARTS, a novel solution that addresses this problem by nesting several neural networks at cell level.
- Score: 75.12821786565318
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentiable ARchiTecture Search (DARTS) is one of the most trending Neural
Architecture Search (NAS) methods, drastically reducing search cost by
resorting to Stochastic Gradient Descent (SGD) and weight-sharing. However, it
also greatly reduces the search space, thus excluding potentially promising
architectures from being discovered. In this paper, we propose D-DARTS, a novel
solution that addresses this problem by nesting several neural networks at
cell-level instead of using weight-sharing to produce more diversified and
specialized architectures. Moreover, we introduce a novel algorithm which can
derive deeper architectures from a few trained cells, increasing performance
and saving computation time. Our solution is able to provide state-of-the-art
results on CIFAR-10, CIFAR-100 and ImageNet while using significantly fewer
parameters than previous baselines, resulting in more hardware-efficient neural
networks.
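To make the mechanism concrete, below is a minimal PyTorch sketch of the continuous relaxation used by DARTS (a softmax-weighted mixture of candidate operations on each edge) together with the per-cell architecture parameters that D-DARTS's cell-level nesting implies. The class names, the tiny operation set, and the chain-shaped cell are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny candidate-operation set for illustration (real search spaces
# use ~8 operations such as separable and dilated convolutions).
OPS = {
    "skip": lambda c: nn.Identity(),
    "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1, bias=False),
    "maxpool": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
}

class MixedOp(nn.Module):
    """DARTS edge: a softmax-weighted sum over all candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList(op(channels) for op in OPS.values())

    def forward(self, x, alpha):
        weights = F.softmax(alpha, dim=-1)  # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class Cell(nn.Module):
    """Vanilla DARTS shares one alpha tensor across all cells of the same
    type; D-DARTS instead nests the search at cell level, so each cell
    owns its own architecture parameters and can specialise."""
    def __init__(self, channels, n_edges=4):
        super().__init__()
        self.edges = nn.ModuleList(MixedOp(channels) for _ in range(n_edges))
        self.alpha = nn.Parameter(1e-3 * torch.randn(n_edges, len(OPS)))

    def forward(self, x):
        for edge, a in zip(self.edges, self.alpha):
            x = x + edge(x, a)  # simplified topology: a residual chain
        return x

cell = Cell(channels=16)
out = cell(torch.randn(2, 16, 8, 8))  # shape preserved: (2, 16, 8, 8)
```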
Related papers
- The devil is in discretization discrepancy. Robustifying Differentiable NAS with Single-Stage Searching Protocol [2.4300749758571905]
Gradient-based methods suffer from discretization error, which can severely damage the process of obtaining the final architecture.
We introduce a novel single-stage searching protocol, which is not reliant on decoding a continuous architecture.
Our results demonstrate that this approach outperforms other DNAS methods by achieving 75.3% in the searching stage on the Cityscapes validation dataset.
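As a quick illustration of where the discretization error comes from, the sketch below collapses a softmaxed architecture weight vector to its one-hot argmax, the decoding step that standard DNAS pipelines perform and that the single-stage protocol avoids (toy numbers, illustrative only):
```python
import torch
import torch.nn.functional as F

# Each row holds the architecture logits of one edge.
alpha = torch.tensor([[2.0, 1.9, -1.0],    # edge 0: two ops nearly tied
                      [3.0, -2.0, -2.5]])  # edge 1: one clear winner
weights = F.softmax(alpha, dim=-1)
one_hot = F.one_hot(weights.argmax(dim=-1), num_classes=alpha.size(-1)).float()

print(weights)  # edge 0 is roughly [0.51, 0.46, 0.03]
print(one_hot)  # discretizing edge 0 discards almost half the mixture,
                # which is exactly the discrepancy the paper targets
```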
arXiv Detail & Related papers (2024-05-26T15:44:53Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
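One plausible reading of block-wise supervision, sketched below: a candidate block is rated by how closely it reproduces the corresponding teacher block's output when fed the teacher's input to that block, so every candidate gets a score without training a full network. The helper name and the MSE criterion are assumptions for illustration, not the paper's exact protocol.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rate_candidate(student_block: nn.Module,
                   teacher_in: torch.Tensor,
                   teacher_out: torch.Tensor) -> float:
    """Block-wise distillation score: lower means the candidate block
    mimics the teacher block more faithfully."""
    with torch.no_grad():
        return F.mse_loss(student_block(teacher_in), teacher_out).item()

# Toy usage with random feature maps standing in for teacher activations.
teacher_in = torch.randn(4, 16, 8, 8)
teacher_out = torch.randn(4, 16, 8, 8)
candidate = nn.Conv2d(16, 16, 3, padding=1)
print(rate_candidate(candidate, teacher_in, teacher_out))
```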
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream approach to neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
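For reference, the implicit-function-theorem hypergradient that iDARTS approximates stochastically can be written as follows (my transcription in generic bilevel notation; the paper's derivation and approximations may differ in detail):
```latex
% Inner problem: w^{*}(\alpha) = \arg\min_{w} \mathcal{L}_{train}(w, \alpha).
% By the implicit function theorem, the hypergradient of the validation
% loss with respect to the architecture parameters \alpha is
\nabla_{\alpha} \mathcal{L}_{val}
  = \nabla_{\alpha} \mathcal{L}_{val}(w^{*}, \alpha)
  - \nabla^{2}_{\alpha w} \mathcal{L}_{train}(w^{*}, \alpha)
    \left[ \nabla^{2}_{w} \mathcal{L}_{train}(w^{*}, \alpha) \right]^{-1}
    \nabla_{w} \mathcal{L}_{val}(w^{*}, \alpha)
% The inverse-Hessian-vector product is the term that stochastic
% approximations make tractable.
```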
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- Making Differentiable Architecture Search less local [9.869449181400466]
Differentiable neural architecture search (DARTS) is a promising NAS approach that dramatically increases search efficiency.
It has been shown to suffer from performance collapse, where the search often leads to detrimental architectures.
We develop a more global optimisation scheme that is able to better explore the space without changing the DARTS problem formulation.
arXiv Detail & Related papers (2021-04-21T10:36:43Z)
- Partially-Connected Differentiable Architecture Search for Deepfake and Spoofing Detection [14.792884010821762]
This paper reports the first successful application of a differentiable architecture search (DARTS) approach to the deepfake and spoofing detection problems.
DARTS operates upon a continuous, differentiable search space which enables both the architecture and parameters to be optimised via gradient descent.
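The underlying formulation is the standard DARTS bilevel program: network weights w are fit on training data in the inner level, while architecture parameters \alpha are updated against the validation loss in the outer level:
```latex
\min_{\alpha} \; \mathcal{L}_{val}\!\left( w^{*}(\alpha), \alpha \right)
\quad \text{s.t.} \quad
w^{*}(\alpha) = \arg\min_{w} \; \mathcal{L}_{train}(w, \alpha)
```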
arXiv Detail & Related papers (2021-04-07T13:53:20Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
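Below is a minimal NumPy sketch of the ISTA iteration that gives the method its name: solving a LASSO-type sparse-coding problem whose sparse solution, in ISTA-NAS's formulation, selects few operations/edges. The problem instance is synthetic and illustrative.
```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.1, iters=300):
    """Solves min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (step size 1/L with L = ||A||_2^2)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
print(np.nonzero(ista(A, A @ x_true, lam=0.05))[0])  # sparse support
```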
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
- Disentangled Neural Architecture Search [7.228790381070109]
We propose disentangled neural architecture search (DNAS) which disentangles the hidden representation of the controller into semantically meaningful concepts.
DNAS successfully disentangles the architecture representations, including operation selection, skip connections, and number of layers.
Dense-sampling leads to neural architecture search with higher efficiency and better performance.
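The sketch below shows one way such disentanglement can be structured: a controller whose latent code is split into named slices, each feeding a separate head for one architecture concept. The class, slice sizes, and heads are illustrative assumptions, not the paper's model.
```python
import torch
import torch.nn as nn

class DisentangledController(nn.Module):
    """Toy controller: each 8-dim slice of the hidden representation
    drives exactly one concept, mirroring the idea that operation
    choice, skip connections, and depth are controlled independently."""
    def __init__(self, latent=24, n_ops=5, max_layers=12):
        super().__init__()
        self.encoder = nn.Linear(latent, latent)
        self.op_head = nn.Linear(8, n_ops)          # operation selection
        self.skip_head = nn.Linear(8, 2)            # skip connection on/off
        self.depth_head = nn.Linear(8, max_layers)  # number of layers

    def forward(self, z):
        h = torch.tanh(self.encoder(z))
        return (self.op_head(h[..., :8]),
                self.skip_head(h[..., 8:16]),
                self.depth_head(h[..., 16:24]))

op_logits, skip_logits, depth_logits = DisentangledController()(torch.randn(1, 24))
```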
arXiv Detail & Related papers (2020-09-24T03:35:41Z)
- Multi-Objective Neural Architecture Search Based on Diverse Structures and Adaptive Recommendation [4.595675084986132]
The search space of neural architecture search (NAS) for convolutional neural networks (CNNs) is huge.
We propose the MoARR algorithm, which utilizes existing research results and historical information to quickly find architectures that are both lightweight and accurate.
Experimental results show that our MoARR can achieve a powerful and lightweight model (with 1.9% error rate and 2.3M parameters) on CIFAR-10 in 6 GPU hours.
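Since MoARR trades off accuracy against model size, a tiny helper like the one below illustrates the underlying multi-objective selection: keeping only candidates that are Pareto-optimal on (error rate, parameter count). This is an illustrative utility, not the paper's recommendation algorithm.
```python
from typing import List, Tuple

def pareto_front(models: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the candidates not dominated on (error %, params in M);
    lower is better in both objectives."""
    return [m for m in models
            if not any(o[0] <= m[0] and o[1] <= m[1] and o != m
                       for o in models)]

# Hypothetical candidates: (error rate %, parameters in millions).
candidates = [(1.9, 2.3), (2.5, 1.1), (1.8, 9.0), (2.6, 2.4)]
print(pareto_front(candidates))  # (2.6, 2.4) is dominated by (1.9, 2.3)
```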
arXiv Detail & Related papers (2020-07-06T13:42:33Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables, modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
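A minimal PyTorch sketch of the core idea: sampling the per-edge mixing weights from a Dirichlet with learnable concentration, where rsample() provides the pathwise (reparameterized) derivative that lets the concentration be trained by SGD. The toy loss stands in for the real search objective.
```python
import torch
from torch.distributions import Dirichlet

n_ops = 5
log_beta = torch.zeros(n_ops, requires_grad=True)  # learnable concentration

weights = Dirichlet(log_beta.exp()).rsample()  # differentiable simplex sample
loss = (weights ** 2).sum()                    # stand-in for validation loss
loss.backward()                                # gradient flows into log_beta
print(weights, log_beta.grad)
```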
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
- Multi-fidelity Neural Architecture Search with Knowledge Distillation [69.09782590880367]
We propose a Bayesian multi-fidelity method for neural architecture search: MF-KD.
Knowledge distillation adds a term to the loss function that forces the network to mimic a teacher network.
We show that training for a few epochs with such a modified loss function leads to a better selection of neural architectures than training for a few epochs with a logistic loss.
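The modified loss the summary refers to typically looks like the sketch below: cross-entropy on the labels plus a temperature-softened KL term pulling the student toward the teacher's outputs. The temperature and mixing weight are illustrative hyperparameters, not values from the paper.
```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, lam=0.5):
    """(1 - lam) * cross-entropy + lam * T^2 * KL(teacher || student),
    the standard knowledge-distillation objective."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    return (1 - lam) * ce + lam * kd

student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(kd_loss(student, teacher, labels))
```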
arXiv Detail & Related papers (2020-06-15T12:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences arising from its use.