Understanding the wiring evolution in differentiable neural architecture search
- URL: http://arxiv.org/abs/2009.01272v4
- Date: Thu, 25 Feb 2021 05:44:52 GMT
- Title: Understanding the wiring evolution in differentiable neural architecture search
- Authors: Sirui Xie, Shoukang Hu, Xinjiang Wang, Chunxiao Liu, Jianping Shi, Xunying Liu, Dahua Lin
- Abstract summary: Controversy exists over whether differentiable neural architecture search methods discover wiring topology effectively.
We study the underlying mechanism of several existing differentiable NAS frameworks.
- Score: 114.31723873105082
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Controversy exists over whether differentiable neural architecture search methods discover wiring topology effectively. To understand how wiring topology evolves, we study the underlying mechanism of several existing differentiable NAS frameworks. Our investigation is motivated by three observed search patterns of differentiable NAS: 1) they search by growing rather than pruning; 2) wider networks are preferred over deeper ones; 3) no edges are selected in bi-level optimization. To anatomize these phenomena, we propose a unified view of the search algorithms of existing frameworks, transferring the global optimization to local cost minimization. Based on this reformulation, we conduct empirical and theoretical analyses, revealing implicit inductive biases in the cost's assignment mechanism and evolution dynamics that cause the observed phenomena. These biases indicate strong discrimination towards certain topologies. In light of this, we pose questions that future differentiable methods for neural wiring discovery need to confront, hoping to evoke a discussion and a rethinking of how much bias has been implicitly enforced in existing NAS methods.
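For readers unfamiliar with the frameworks under study, the setting being analyzed can be pictured with a minimal DARTS-style sketch: each edge holds a softmax-weighted mixture of candidate operations, operation weights are updated on a training split, and architecture parameters on a validation split (the bi-level optimization the abstract refers to). The code below is an illustrative PyTorch sketch, not the paper's implementation; `MixedEdge`, the chosen candidate ops, and the toy losses are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedEdge(nn.Module):
    """One edge of a cell: a softmax-weighted mixture of candidate ops."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.AvgPool2d(3, stride=1, padding=1),         # average pooling
        ])
        # Architecture parameters alpha; softmax(alpha) weights the ops.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

edge = MixedEdge(8)
# Bi-level optimization: operation weights and alphas get separate optimizers.
w_opt = torch.optim.SGD([p for n, p in edge.named_parameters() if n != "alpha"], lr=0.025)
a_opt = torch.optim.Adam([edge.alpha], lr=3e-4)

x_train, x_val = torch.randn(4, 8, 16, 16), torch.randn(4, 8, 16, 16)

# Inner step: update operation weights on training data.
w_opt.zero_grad()
edge(x_train).pow(2).mean().backward()  # stand-in for the training loss
w_opt.step()

# Outer step: update architecture parameters on validation data.
a_opt.zero_grad()
edge(x_val).pow(2).mean().backward()    # stand-in for the validation loss
a_opt.step()
```

The paper's unified view recasts the update of `alpha` as a local cost-minimization step on each edge, and it is in this cost's assignment mechanism and dynamics that the implicit inductive biases above are traced.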
Related papers
- Structure of Artificial Neural Networks -- Empirical Investigations [0.0]
Within a decade, deep learning overtook the dominant solution methods for countless problems in artificial intelligence.
With a formal definition for structures of neural networks, neural architecture search problems and solution methods can be formulated under a common framework.
Does structure make a difference or can it be chosen arbitrarily?
arXiv Detail & Related papers (2024-10-12T16:13:28Z)
- A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations [20.958265043544603]
Modern deep neural networks come with massive model sizes that require significant computational and storage resources.
Researchers have increasingly explored pruning techniques as a popular research direction in neural network compression.
We provide a review of existing research works on deep neural network pruning in a taxonomy of 1) universal/specific speedup, 2) when to prune, 3) how to prune, and 4) fusion of pruning and other compression techniques.
arXiv Detail & Related papers (2023-08-13T13:34:04Z)
- HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters using specifically designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
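The summary gives only the high-level idea, so the following is a loudly hypothetical sketch of what "generating structural parameters from a hyper kernel" can look like in general: a small learned module emits the architecture weights instead of treating them as free variables. All names (`HyperKernel`, `n_edges`, `n_ops`) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class HyperKernel(nn.Module):
    """Illustrative only: a learned module that emits the structural
    (architecture) parameters rather than optimizing them directly."""
    def __init__(self, n_edges, n_ops, dim=16):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(n_edges, dim))  # one code per edge
        self.hyper = nn.Linear(dim, n_ops)                    # code -> op logits

    def forward(self):
        # Structural parameters are generated, not free variables.
        return self.hyper(self.embed).softmax(dim=-1)  # (n_edges, n_ops)

weights = HyperKernel(n_edges=6, n_ops=4)()
print(weights.shape)  # torch.Size([6, 4])
```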
arXiv Detail & Related papers (2023-04-23T17:27:40Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
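As a quick empirical companion to this claim, one can probe how the effective rank of feature matrices evolves with depth. The sketch below uses the stable rank (squared Frobenius norm over squared spectral norm) as a smooth rank proxy; it is a generic probe under assumed settings, not the paper's protocol.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = nn.ModuleList(nn.Linear(64, 64) for _ in range(8))  # a plain MLP

x = torch.randn(256, 64)  # 256 samples, 64 features
with torch.no_grad():
    for i, layer in enumerate(layers):
        x = torch.tanh(layer(x))
        s = torch.linalg.svdvals(x)  # singular values, descending
        stable_rank = (s.pow(2).sum() / s[0].pow(2)).item()
        print(f"layer {i}: stable rank {stable_rank:.1f}")
```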
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple but efficient regularization method, termed Beta-Decay, to regularize the DARTS-based NAS search process.
Experimental results on NAS-Bench-201 show that our proposed method helps stabilize the search process and makes the searched networks more transferable across different datasets.
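The summary does not reproduce the exact Beta-Decay term, so the sketch below uses a simple stand-in penalty on the softmax-activated weights `beta` only to show where such a regularizer plugs into the architecture update; the `arch_step` helper, the toy validation loss, and the squared-beta penalty are all assumptions, not the paper's formula.

```python
import torch
import torch.nn.functional as F

# Architecture parameters for one edge with 5 candidate operations.
alpha = torch.zeros(5, requires_grad=True)
opt = torch.optim.Adam([alpha], lr=3e-4)

def arch_step(val_loss_fn, lam=0.5):
    beta = F.softmax(alpha, dim=0)  # activated architecture weights
    # Stand-in decay term on beta (NOT the paper's exact Beta-Decay formula):
    # penalizing sum(beta^2) pulls the weights toward uniform, discouraging
    # a spiky, premature selection during search.
    reg = beta.pow(2).sum()
    loss = val_loss_fn(beta) + lam * reg
    opt.zero_grad()
    loss.backward()
    opt.step()

# Toy validation loss that favors operation 0.
arch_step(lambda b: -torch.log(b[0]))
print(F.softmax(alpha, dim=0))
```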
arXiv Detail & Related papers (2022-03-03T11:47:14Z)
- Redefining Neural Architecture Search of Heterogeneous Multi-Network Models by Characterizing Variation Operators and Model Components [71.03032589756434]
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize both the variation operators, by their effect on the complexity and performance of the model, and the models themselves, using diverse metrics that estimate the quality of their constituent parts.
arXiv Detail & Related papers (2021-06-16T17:12:26Z)
- On the Exploitation of Neuroevolutionary Information: Analyzing the Past for a More Efficient Future [60.99717891994599]
We propose an approach that extracts information from neuroevolutionary runs and uses it to build a metamodel.
We inspect the best structures found during neuroevolutionary searches of generative adversarial networks with varying characteristics.
arXiv Detail & Related papers (2021-05-26T20:55:29Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- A Survey on Neural Network Interpretability [25.27545364222555]
Interpretability is a desired property for deep networks to become powerful tools in other research fields.
We propose a novel taxonomy organized along three dimensions: type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus.
arXiv Detail & Related papers (2020-12-28T15:09:50Z)
- VINNAS: Variational Inference-based Neural Network Architecture Search [2.685668802278155]
We present a differentiable variational inference-based NAS method for searching sparse convolutional neural networks.
Our method finds diverse network cells while showing state-of-the-art accuracy with up to almost two times fewer non-zero parameters.
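As a rough illustration of the variational-inference angle (not VINNAS itself), a layer can carry a factorized Gaussian posterior over its weights and prune those with a low signal-to-noise ratio; `VariationalLinear` and the SNR threshold are hypothetical.

```python
import torch
import torch.nn as nn

class VariationalLinear(nn.Module):
    """Illustrative only: a linear layer with a factorized Gaussian posterior
    over weights; low signal-to-noise weights can be pruned to zero."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        # Reparameterization trick: sample weights while keeping gradients.
        eps = torch.randn_like(self.mu)
        w = self.mu + self.log_sigma.exp() * eps
        return x @ w.t()

    def sparsify(self, snr_threshold=1.0):
        # Zero out weights whose posterior mean is small relative to its std.
        snr = self.mu.abs() / self.log_sigma.exp()
        return torch.where(snr > snr_threshold, self.mu, torch.zeros_like(self.mu))

layer = VariationalLinear(16, 8)
y = layer(torch.randn(4, 16))
w_sparse = layer.sparsify()
print(f"nonzero fraction: {(w_sparse != 0).float().mean().item():.2f}")
```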
arXiv Detail & Related papers (2020-07-12T21:47:35Z)