Topological Insights into Sparse Neural Networks
- URL: http://arxiv.org/abs/2006.14085v2
- Date: Sat, 4 Jul 2020 17:11:12 GMT
- Title: Topological Insights into Sparse Neural Networks
- Authors: Shiwei Liu, Tim Van der Lee, Anil Yaman, Zahra Atashgahi, Davide
Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu
- Abstract summary: We introduce an approach to understand and compare sparse neural network topologies from the perspective of graph theory.
We first propose Neural Network Sparse Topology Distance (NNSTD) to measure the distance between different sparse neural networks.
We show that adaptive sparse connectivity can always unveil a plenitude of sparse sub-networks with very different topologies which outperform the dense model.
- Score: 16.515620374178535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse neural networks are effective approaches to reduce the resource
requirements for the deployment of deep neural networks. Recently, the concept
of adaptive sparse connectivity has emerged to allow training sparse neural
networks from scratch by optimizing the sparse structure during training.
However, comparing different sparse topologies and determining how sparse
topologies evolve during training, especially when the sparse structure itself
is being optimized, remain challenging open questions. This comparison becomes
increasingly complex because the number of possible topological comparisons
grows exponentially with the size of
networks. In this work, we introduce an approach to understand and compare
sparse neural network topologies from the perspective of graph theory. We first
propose Neural Network Sparse Topology Distance (NNSTD) to measure the distance
between different sparse neural networks. Further, we demonstrate that sparse
neural networks can outperform over-parameterized dense models, even without any
further structure optimization. Finally, by quantifying and comparing their
topological evolution during training, we show that adaptive sparse connectivity
consistently unveils a plenitude of sparse sub-networks with very different
topologies that outperform the dense model.
The latter findings complement the Lottery Ticket Hypothesis by showing that
there is a much more efficient and robust way to find "winning tickets".
Altogether, our results take a step toward a better theoretical understanding of
sparse neural networks and demonstrate the utility of graph theory for
analyzing them.
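The abstract names NNSTD without spelling out how such a graph-theoretic comparison could look, so here is a minimal illustrative sketch in Python: each sparse layer is treated as a bipartite graph given by its binary connectivity mask, neurons are greedily matched by the overlap of their incoming connections, and a normalized edit distance is averaged over matched pairs. The function name `layer_topology_distance`, the greedy matching, and the 1 - Jaccard normalization are assumptions made for illustration, not the authors' exact NNSTD definition.

```python
import numpy as np

def layer_topology_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Rough sketch of a sparse-topology distance between two layers.

    Each layer is given as a binary connectivity mask of shape
    (n_inputs, n_neurons); a 1 means the connection exists. Neurons are
    greedily matched by the overlap of their incoming connections, and the
    distance is the average normalized edit distance over matched neurons.
    This is an illustrative approximation, not the paper's exact NNSTD.
    """
    assert mask_a.shape[0] == mask_b.shape[0], "layers must share the input dimension"
    n_a, n_b = mask_a.shape[1], mask_b.shape[1]
    available = set(range(n_b))
    distances = []
    for i in range(n_a):
        col_a = mask_a[:, i].astype(bool)
        best_j, best_d = None, 1.0
        for j in available:
            col_b = mask_b[:, j].astype(bool)
            union = np.logical_or(col_a, col_b).sum()
            # Normalized edit distance: fraction of connections present in
            # exactly one of the two neurons (1 - Jaccard similarity).
            d = 1.0 if union == 0 else np.logical_xor(col_a, col_b).sum() / union
            if best_j is None or d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            available.remove(best_j)
            distances.append(best_d)
    return float(np.mean(distances)) if distances else 0.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two random sparse masks for a 784 -> 100 layer at ~5% density.
    m1 = (rng.random((784, 100)) < 0.05).astype(int)
    m2 = (rng.random((784, 100)) < 0.05).astype(int)
    print(layer_topology_distance(m1, m1))  # ~0.0: identical topologies
    print(layer_topology_distance(m1, m2))  # close to 1.0: unrelated random masks
```

Under these assumptions, identical masks yield a distance near 0, while two unrelated random masks at the same sparsity yield a distance close to 1, which is the qualitative behavior one would want from such a measure.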
Related papers
- Peer-to-Peer Learning Dynamics of Wide Neural Networks [10.179711440042123]
We provide an explicit, non-asymptotic characterization of the learning dynamics of wide neural networks trained using popular distributed gradient descent (DGD) algorithms.
We validate our analytical results by accurately predicting the error on classification tasks.
arXiv Detail & Related papers (2024-09-23T17:57:58Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- CCasGNN: Collaborative Cascade Prediction Based on Graph Neural Networks [0.49269463638915806]
Cascade prediction aims at modeling information diffusion in the network.
Recent efforts have been devoted to combining network structure and sequence features via graph neural networks and recurrent neural networks.
We propose a novel method CCasGNN considering the individual profile, structural features, and sequence information.
arXiv Detail & Related papers (2021-12-07T11:37:36Z)
- Understanding Convolutional Neural Networks from Theoretical Perspective via Volterra Convolution [22.058311878382142]
This study explores the relationship between convolutional neural networks and finite Volterra convolutions.
It provides a novel approach to explain and study the overall characteristics of neural networks without being disturbed by the complex network architectures.
arXiv Detail & Related papers (2021-10-19T12:07:46Z)
- Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set [0.0]
We suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology.
Using simplicial complex representations of neural networks, we study how the PH diagram distance evolves during the neural network learning process.
Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy.
arXiv Detail & Related papers (2021-05-31T09:17:31Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)