Theory-Inspired Path-Regularized Differential Network Architecture Search
- URL: http://arxiv.org/abs/2006.16537v2
- Date: Mon, 12 Oct 2020 12:12:55 GMT
- Title: Theory-Inspired Path-Regularized Differential Network Architecture Search
- Authors: Pan Zhou, Caiming Xiong, Richard Socher, Steven C.H. Hoi
- Abstract summary: We study the impact of skip connections on fast network optimization and their competitive advantage over other types of operations in differential architecture search (DARTS).
We propose a theory-inspired path-regularized DARTS that consists of two key modules: (i) a differential group-structured sparse binary gate introduced for each operation to avoid unfair competition among operations, and (ii) a path-depth-wise regularization used to encourage search exploration of deep architectures, which converge more slowly than shallow ones.
- Score: 206.93821077400733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite its high search efficiency, differential architecture search (DARTS)
often selects network architectures dominated by skip connections, which leads
to performance degradation. However, a theoretical understanding of this issue
remains absent, hindering the development of more advanced methods in a
principled way. In this work, we address this problem by theoretically analyzing
the effects of various types of operations, e.g. convolution, skip connection,
and the zero operation, on network optimization. We prove that
architectures with more skip connections can converge faster than the other
candidates, and thus are selected by DARTS. This result, for the first time,
theoretically and explicitly reveals the impact of skip connections on fast
network optimization and their competitive advantage over other types of
operations in DARTS. We then propose a theory-inspired path-regularized DARTS
that consists of two key modules: (i) a differential group-structured sparse
binary gate introduced for each operation to avoid unfair competition among
operations, and (ii) a path-depth-wise regularization used to encourage search
exploration of deep architectures, which, as shown by our theory, often converge
more slowly than shallow ones and are therefore not well explored during the search.
Experimental results on image classification tasks validate its advantages.
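As a rough illustration of the mechanism the abstract analyzes, the toy sketch below (hypothetical code, not the authors' implementation) shows how DARTS relaxes the discrete operation choice on an edge into a softmax-weighted mixture of candidates; once the architecture parameter for the skip connection grows large, the identity path dominates the edge's output, which is the failure mode the paper studies.

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over architecture parameters."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Candidate operations on one edge (1-D stand-ins for real layers):
# a skip connection (identity), a "convolution" (here a toy linear map),
# and the zero operation.
ops = {
    "skip": lambda x: x,
    "conv": lambda x: 0.5 * x,          # hypothetical learned transform
    "zero": lambda x: np.zeros_like(x),
}

def mixed_op(x, alpha):
    """DARTS mixed operation: softmax-weighted sum of all candidates."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops.values()))

x = np.ones(4)
alpha = np.array([2.0, 0.5, -1.0])  # skip's parameter is largest
y = mixed_op(x, alpha)
```

With these (illustrative) values, the skip connection receives roughly 79% of the softmax mass, so the edge's output is close to the identity of its input — a miniature version of the skip-connection dominance the proposed binary gates and path-depth-wise regularization are designed to counteract.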
Related papers
- Operation-level Progressive Differentiable Architecture Search [19.214462477848535]
We propose operation-level progressive differentiable neural architecture search (OPP-DARTS) to avoid skip connections aggregation.
The architecture found by our method on CIFAR-10 outperforms the one found by standard DARTS.
arXiv Detail & Related papers (2023-02-11T09:18:01Z)
- $\Lambda$-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells [11.777101481512423]
Differentiable neural architecture search (DARTS) is a popular method for neural architecture search (NAS).
We show that DARTS suffers from a specific structural flaw due to its weight-sharing framework that limits the convergence of DARTS to saturation points of the softmax function.
We propose two new regularization terms that aim to prevent performance collapse by harmonizing operation selection via aligning gradients of layers.
arXiv Detail & Related papers (2022-10-14T17:54:01Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Unified Field Theory for Deep and Recurrent Neural Networks [56.735884560668985]
We present a unified and systematic derivation of the mean-field theory for both recurrent and deep networks.
We find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks.
Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in $1/n$.
arXiv Detail & Related papers (2021-12-10T15:06:11Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream of neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search [70.57382341642418]
Weight sharing has become a de facto standard in neural architecture search because it enables the search to be done on commodity hardware.
Recent works have empirically shown a ranking disorder between the performance of stand-alone architectures and that of the corresponding shared-weight networks.
We propose a regularization term that aims to maximize the correlation between the performance rankings of the shared-weight network and that of the standalone architectures.
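The correlation-maximizing idea above can be sketched with a simple pairwise ranking penalty (an illustrative stand-in, not this paper's exact loss): for every pair of architectures, penalize the shared-weight performance estimates whenever they order the pair differently from the stand-alone ground truth.

```python
import itertools

def ranking_penalty(standalone, shared, margin=0.0):
    """Hinge penalty on pairs ranked discordantly by the super-net.

    standalone: accuracies of the stand-alone (fully trained) models
    shared:     the same architectures' accuracies under weight sharing
    """
    loss = 0.0
    for i, j in itertools.combinations(range(len(standalone)), 2):
        # sign of the ground-truth ordering for pair (i, j)
        sign = 1.0 if standalone[i] > standalone[j] else -1.0
        # accumulate penalty when the shared-weight ordering disagrees
        loss += max(0.0, margin - sign * (shared[i] - shared[j]))
    return loss

# perfectly concordant rankings incur zero penalty
concordant = ranking_penalty([0.9, 0.8, 0.7], [0.5, 0.4, 0.3])  # 0.0
# a fully reversed super-net ranking is penalized on every pair
discordant = ranking_penalty([0.9, 0.8, 0.7], [0.3, 0.4, 0.5])
```

Minimizing such a term during super-net training pushes the shared-weight ranking toward agreement with the stand-alone ranking, which is the stated goal of the landmark regularizer.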
arXiv Detail & Related papers (2021-04-12T09:32:33Z)
- Partially-Connected Differentiable Architecture Search for Deepfake and Spoofing Detection [14.792884010821762]
This paper reports the first successful application of a differentiable architecture search (DARTS) approach to the deepfake and spoofing detection problems.
DARTS operates upon a continuous, differentiable search space which enables both the architecture and parameters to be optimised via gradient descent.
arXiv Detail & Related papers (2021-04-07T13:53:20Z)
- Reframing Neural Networks: Deep Structure in Overcomplete Representations [41.84502123663809]
We introduce deep frame approximation, a unifying framework for representation learning with structured overcomplete frames.
We quantify structural differences with the deep frame potential, a data-independent measure of coherence linked to representation uniqueness and stability.
This connection to the established theory of overcomplete representations suggests promising new directions for principled deep network architecture design.
arXiv Detail & Related papers (2021-03-10T01:15:14Z)
- RARTS: An Efficient First-Order Relaxed Architecture Search Method [5.491655566898372]
Differentiable architecture search (DARTS) is an effective method for data-driven neural network design based on solving a bilevel optimization problem.
We formulate a single level alternative and a relaxed architecture search (RARTS) method that utilizes the whole dataset in architecture learning via both data and network splitting.
For the task of searching topological architecture, i.e., the edges and the operations, RARTS obtains a higher accuracy and 60% reduction of computational cost than second-order DARTS on CIFAR-10.
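The single-level alternation idea can be illustrated on a toy problem (hypothetical names and objective, not the RARTS algorithm itself): instead of solving a nested bilevel problem, weight and architecture variables take alternating gradient steps, as they would on the two data splits.

```python
# Toy joint objective L(w, a) = (w - a)^2 + 0.1 * a^2, whose unique
# minimum is w = a = 0; the two gradient steps stand in for updates
# computed on the two data/network splits.
def grad_w(w, a):
    return 2.0 * (w - a)

def grad_a(w, a):
    return -2.0 * (w - a) + 0.2 * a

w, a, lr = 1.0, 0.0, 0.1
for _ in range(1000):
    w -= lr * grad_w(w, a)   # weight step (first split)
    a -= lr * grad_a(w, a)   # architecture step (second split)
# both variables settle near the joint minimum w = a = 0
```

The point of the sketch is only that alternating first-order steps on a single-level objective can converge without ever forming the second-order hypergradient that bilevel DARTS requires.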
arXiv Detail & Related papers (2020-08-10T04:55:51Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.