DARTS for Inverse Problems: a Study on Hyperparameter Sensitivity
- URL: http://arxiv.org/abs/2108.05647v1
- Date: Thu, 12 Aug 2021 10:28:02 GMT
- Title: DARTS for Inverse Problems: a Study on Hyperparameter Sensitivity
- Authors: Jonas Geiping, Jovita Lukasik, Margret Keuper, Michael Moeller
- Abstract summary: Differentiable architecture search (DARTS) is a widely researched tool for neural architecture search.
The paper recommends reporting the results of any DARTS-based method over several runs along with the underlying performance statistics.
- Score: 21.263326724329698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentiable architecture search (DARTS) is a widely researched tool for
neural architecture search, due to its promising results for image
classification. The main benefit of DARTS is the effectiveness achieved through
the weight-sharing one-shot paradigm, which allows efficient architecture
search. In this work, we investigate DARTS in a systematic case study of
inverse problems, which allows us to analyze these potential benefits in a
controlled manner. Although we demonstrate that the success of DARTS can be
extended from image classification to reconstruction, our experiments yield
three fundamental difficulties in the evaluation of DARTS-based methods: First,
the results show a large variance in all test cases. Second, the final
performance is highly dependent on the hyperparameters of the optimizer. And
third, the performance of the weight-sharing architecture used during training
does not reflect the final performance of the found architecture well. Thus, we
conclude that it is necessary to 1) report the results of any DARTS-based method
over several runs along with the underlying performance statistics, 2) show the
correlation of the training and final architecture performance, and 3)
carefully consider if the computational efficiency of DARTS outweighs the costs
of hyperparameter optimization and multiple runs.
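As context for the weight-sharing relaxation the abstract refers to, here is a minimal sketch of the DARTS mixed operation on a single edge; the toy operations and parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy candidate operations on one edge (illustrative stand-ins for the
# conv/pool/skip choices in a real DARTS search space).
ops = [
    lambda x: x,                  # skip connection
    lambda x: np.zeros_like(x),   # zero operation
    lambda x: 2.0 * x,            # stand-in for a learned operation
]

def mixed_op(x, alpha):
    """Continuous relaxation: a softmax-weighted sum of all candidate
    operations, so the architecture parameters alpha can be optimized
    jointly with the network weights by gradient descent."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([1.0, -2.0, 3.0])
alpha = np.array([0.1, 0.0, 2.0])   # architecture parameters for this edge
y = mixed_op(x, alpha)
# Discretization after search keeps the operation with the largest alpha:
chosen = int(np.argmax(alpha))
```

Because every candidate operation is evaluated in each forward pass, all candidates share the same supernet weights; this is the one-shot efficiency the abstract credits, and also the source of the training/final-performance mismatch the paper reports.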
Related papers
- OStr-DARTS: Differentiable Neural Architecture Search based on Operation Strength [70.76342136866413]
Differentiable architecture search (DARTS) has emerged as a promising technique for effective neural architecture search.
DARTS suffers from the well-known degeneration issue, which can lead to deteriorating architectures.
We propose a novel criterion based on operation strength that estimates the importance of an operation by its effect on the final loss.
arXiv Detail & Related papers (2024-09-22T13:16:07Z)
- Efficient Architecture Search via Bi-level Data Pruning [70.29970746807882]
This work pioneers an exploration into the critical role of dataset characteristics for DARTS bi-level optimization.
We introduce a new progressive data pruning strategy that utilizes supernet prediction dynamics as the metric.
Comprehensive evaluations on the NAS-Bench-201 search space, DARTS search space, and MobileNet-like search space validate that the proposed Bi-level Data Pruning (BDP) reduces search costs by over 50%.
arXiv Detail & Related papers (2023-12-21T02:48:44Z)
- Operation-level Progressive Differentiable Architecture Search [19.214462477848535]
We propose operation-level progressive differentiable neural architecture search (OPP-DARTS) to avoid skip connections aggregation.
Our method's performance on CIFAR-10 is superior to the architecture found by standard DARTS.
arXiv Detail & Related papers (2023-02-11T09:18:01Z)
- $\Lambda$-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells [11.777101481512423]
Differentiable neural architecture search (DARTS) is a popular method for neural architecture search (NAS).
We show that DARTS suffers from a specific structural flaw due to its weight-sharing framework that limits the convergence of DARTS to saturation points of the softmax function.
We propose two new regularization terms that aim to prevent performance collapse by harmonizing operation selection via aligning gradients of layers.
arXiv Detail & Related papers (2022-10-14T17:54:01Z)
- ZARTS: On Zero-order Optimization for Neural Architecture Search [94.41017048659664]
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency.
This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, that searches without enforcing the gradient approximation used in DARTS.
In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue.
arXiv Detail & Related papers (2021-10-10T09:35:15Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream approach to neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- RARTS: An Efficient First-Order Relaxed Architecture Search Method [5.491655566898372]
Differentiable architecture search (DARTS) is an effective method for data-driven neural network design based on solving a bilevel optimization problem.
We formulate a single level alternative and a relaxed architecture search (RARTS) method that utilizes the whole dataset in architecture learning via both data and network splitting.
For the task of searching the topological architecture, i.e., the edges and the operations, RARTS obtains higher accuracy and a 60% reduction in computational cost compared with second-order DARTS on CIFAR-10.
arXiv Detail & Related papers (2020-08-10T04:55:51Z) - Off-Policy Reinforcement Learning for Efficient and Effective GAN
Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
arXiv Detail & Related papers (2020-07-17T18:29:17Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weight as random variables, modeled by Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
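The Dirichlet treatment summarized above can be sketched as follows; the toy operations and concentration values are assumptions for illustration, and the pathwise-gradient step is only noted in comments, since plain NumPy can sample from the distribution but cannot differentiate through the sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy candidate operations on one edge (illustrative only).
ops = [lambda x: x, lambda x: -x, lambda x: 0.5 * x]

# Per-operation Dirichlet concentration parameters; in DrNAS these are
# the learned architecture variables.
beta = np.array([2.0, 1.0, 1.0])

# Each forward pass samples mixing weights from Dirichlet(beta) instead
# of taking a deterministic softmax; the sampled weights lie on the
# probability simplex (non-negative, summing to one).  Learning beta via
# pathwise (reparameterized) gradients requires a framework such as
# PyTorch (torch.distributions.Dirichlet(beta).rsample()); NumPy only
# provides the sampling side.
w = rng.dirichlet(beta)

x = np.array([1.0, 2.0])
y = sum(wi * op(x) for wi, op in zip(w, ops))
```

Sampling rather than averaging injects exploration into the search, which is the mechanism the summary credits for DrNAS's gradient-based optimization of architecture distributions.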
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.