Architecture Disentanglement for Deep Neural Networks
- URL: http://arxiv.org/abs/2003.13268v2
- Date: Wed, 24 Mar 2021 03:03:54 GMT
- Title: Architecture Disentanglement for Deep Neural Networks
- Authors: Jie Hu, Liujuan Cao, Qixiang Ye, Tong Tong, ShengChuan Zhang, Ke Li,
Feiyue Huang, Rongrong Ji, Ling Shao
- Abstract summary: We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
- Score: 174.16176919145377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the inner workings of deep neural networks (DNNs) is essential
to provide trustworthy artificial intelligence techniques for practical
applications. Existing studies typically involve linking semantic concepts to
units or layers of DNNs, but fail to explain the inference process. In this
paper, we introduce neural architecture disentanglement (NAD) to fill the gap.
Specifically, NAD learns to disentangle a pre-trained DNN into
sub-architectures according to independent tasks, forming information flows
that describe the inference processes. We investigate whether, where, and how
the disentanglement occurs through experiments conducted with handcrafted and
automatically-searched network architectures, on both object-based and
scene-based datasets. Based on the experimental results, we present three new
findings that provide fresh insights into the inner logic of DNNs. First, DNNs
can be divided into sub-architectures for independent tasks. Second, deeper
layers do not always correspond to higher semantics. Third, the connection type
in a DNN affects how the information flows across layers, leading to different
disentanglement behaviors. With NAD, we further explain why DNNs sometimes give
wrong predictions. Experimental results show that misclassified images have a
high probability of being assigned to task sub-architectures similar to the
correct ones. Code will be available at: https://github.com/hujiecpp/NAD.
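For intuition, the sketch below illustrates the general idea of carving a class-specific sub-architecture out of a pre-trained CNN; it is a hedged approximation, not the paper's NAD algorithm (see the repository above for that). The backbone, the hooked stages, and the mean-activation scoring rule are all illustrative assumptions.

```python
# Illustrative sketch only -- NOT the paper's NAD algorithm.
# Idea: rank the channels of a pre-trained CNN by how strongly they
# respond to one class, and keep the top fraction per stage as a
# rough class-specific "sub-architecture".
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # assumed backbone
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook the four residual stages (layer choice is an assumption).
for stage in ("layer1", "layer2", "layer3", "layer4"):
    getattr(model, stage).register_forward_hook(make_hook(stage))

def sub_architecture(class_batch, keep_ratio=0.25):
    """Return, per stage, indices of the channels most active on one class.

    class_batch: (N, 3, 224, 224) tensor of images from a single class.
    """
    activations.clear()
    with torch.no_grad():
        model(class_batch)
    kept = {}
    for stage, act in activations.items():
        scores = act.mean(dim=(0, 2, 3))  # mean response per channel
        k = max(1, int(keep_ratio * scores.numel()))
        kept[stage] = torch.topk(scores, k).indices
    return kept

# Usage (random tensors standing in for real class images):
sub = sub_architecture(torch.randn(8, 3, 224, 224))
print({stage: idx.numel() for stage, idx in sub.items()})
```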
Related papers
- Two-Phase Dynamics of Interactions Explains the Starting Point of a DNN Learning Over-Fitted Features [68.3512123520931]
We investigate the dynamics of a deep neural network (DNN) learning interactions.
In this paper, we discover that the DNN learns interactions in two phases.
The first phase mainly penalizes interactions of medium and high orders, and the second phase mainly learns interactions of gradually increasing orders.
arXiv Detail & Related papers (2024-05-16T17:13:25Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Exploring the Common Principal Subspace of Deep Features in Neural Networks [50.37178960258464]
We find that different Deep Neural Networks (DNNs) trained with the same dataset share a common principal subspace in latent spaces.
Specifically, we design a new metric, the $\mathcal{P}$-vector, to represent the principal subspace of deep features learned in a DNN.
Small angles (with cosines close to $1.0$) are found when comparing any two DNNs trained with different algorithms or architectures.
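The $\mathcal{P}$-vector construction itself is defined in the cited paper; the sketch below is only a hedged approximation of the comparison it describes: take the top principal direction of each network's feature matrix on shared inputs and measure the (absolute) cosine between the two directions. The toy features and the common feature dimension are assumptions.

```python
# Hedged approximation of the P-vector comparison: represent each DNN's
# feature space by its top principal direction on shared inputs, then
# compare the two directions by absolute cosine similarity.
import numpy as np

def principal_vector(features):
    """Top principal direction of an (n_samples, dim) feature matrix."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # first right-singular vector = first principal direction

def abs_cosine(u, v):
    # Principal directions carry a sign ambiguity, so compare |cosine|.
    return float(abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for features from two DNNs on the same inputs, already
# in a common dimension (an assumption); both share a dominant direction.
rng = np.random.default_rng(0)
dominant = rng.normal(size=512)
feats_a = rng.normal(size=(1000, 1)) * dominant + 0.1 * rng.normal(size=(1000, 512))
feats_b = feats_a + 0.1 * rng.normal(size=(1000, 512))
print(f"|cosine| = {abs_cosine(principal_vector(feats_a), principal_vector(feats_b)):.3f}")
```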
arXiv Detail & Related papers (2021-10-06T15:48:32Z)
- Topological Measurement of Deep Neural Networks Using Persistent Homology [0.7919213739992464]
The inner representation of deep neural networks (DNNs) is indecipherable.
Persistent homology (PH) is employed to investigate the complexity of trained DNNs.
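As a hedged sketch of how such an analysis might look (not necessarily the cited paper's pipeline), one can treat a layer's units as points with distances derived from activation correlations and compute their persistence diagrams; the ripser package and the correlation-distance construction below are assumptions.

```python
# Hedged sketch (not necessarily the cited paper's pipeline): persistent
# homology over a layer's units, with distance = 1 - |activation correlation|.
import numpy as np
from ripser import ripser  # pip install ripser -- assumed dependency

def unit_distances(acts):
    """acts: (n_samples, n_units) -> (n_units, n_units) distance matrix."""
    corr = np.corrcoef(acts.T)  # rows of acts.T are units
    return 1.0 - np.abs(corr)

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 64))  # stand-in for recorded activations
diagrams = ripser(unit_distances(acts), distance_matrix=True, maxdim=1)["dgms"]
print(f"H0 features: {len(diagrams[0])}, H1 features: {len(diagrams[1])}")
```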
arXiv Detail & Related papers (2021-06-06T03:06:15Z)
- exploRNN: Understanding Recurrent Neural Networks through Visual Exploration [6.006493809079212]
Recurrent neural networks (RNNs) are capable of processing sequential data.
We propose exploRNN, the first interactively explorable educational visualization for RNNs.
We provide an overview of the training process of RNNs at a coarse level, while also allowing detailed inspection of the data-flow within LSTM cells.
arXiv Detail & Related papers (2020-12-09T15:06:01Z)
- Examining the causal structures of deep neural networks using information theory [0.0]
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets.
DNNs can also be examined at the level of causation, exploring "what does what" within the layers of the network itself.
Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training.
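As a minimal, hedged illustration of tracking an information-theoretic quantity across training (a generic binned estimator, not the paper's specific causal metrics), the sketch below estimates the mutual information between an upstream and a downstream activation at successive simulated checkpoints.

```python
# Generic, hedged illustration -- not the paper's specific causal metrics.
# Binned mutual-information estimate between two activation signals,
# tracked across (simulated) training checkpoints.
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based MI estimate in bits between two 1-D arrays."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of x
    py = pxy.sum(axis=0, keepdims=True)  # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
for step in range(4):
    upstream = rng.normal(size=10_000)
    noise_scale = 1.0 - 0.25 * step  # coupling tightens over "training"
    downstream = np.tanh(upstream) + noise_scale * rng.normal(size=10_000)
    print(f"checkpoint {step}: MI ~ {mutual_information(upstream, downstream):.3f} bits")
```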
arXiv Detail & Related papers (2020-10-26T19:53:16Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Neural Anisotropy Directions [63.627760598441796]
We define neural anisotropy directions (NADs) as the vectors that encapsulate the directional inductive bias of an architecture.
We show that for the CIFAR-10 dataset, NADs characterize the features used by CNNs to discriminate between different classes.
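In the spirit of this idea, though not the paper's exact construction, the hedged sketch below probes a CNN's directional sensitivity by nudging inputs along normalized 2-D Fourier modes and measuring the change in output; the toy model and probe directions are assumptions.

```python
# Hedged sketch, in the spirit of (not identical to) the NAD construction:
# probe a CNN's output sensitivity to input perturbations along fixed
# 2-D Fourier modes. The toy model and probe directions are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)

def directional_sensitivity(model, direction, n=64, eps=1e-2):
    """Mean output change when inputs are nudged along `direction`."""
    x = torch.randn(n, *direction.shape)
    with torch.no_grad():
        delta = model(x + eps * direction) - model(x)
    return delta.norm(dim=1).mean().item()

grid = torch.arange(32, dtype=torch.float32)
for freq in (1, 4, 8):
    wave = torch.sin(2 * torch.pi * freq * grid / 32)
    direction = (wave[:, None] * wave[None, :]).unsqueeze(0)  # (1, 32, 32)
    direction = direction / direction.norm()                  # unit direction
    print(f"freq {freq}: sensitivity {directional_sensitivity(model, direction):.4f}")
```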
arXiv Detail & Related papers (2020-06-17T08:36:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.