Emergence of Network Motifs in Deep Neural Networks
- URL: http://arxiv.org/abs/1912.12244v1
- Date: Fri, 27 Dec 2019 17:05:38 GMT
- Title: Emergence of Network Motifs in Deep Neural Networks
- Authors: Matteo Zambra, Alberto Testolin, Amos Maritan
- Abstract summary: We show that network science tools can be successfully applied to the study of artificial neural networks.
In particular, we study the emergence of network motifs in multi-layer perceptrons.
- Score: 0.35911228556176483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network science can offer fundamental insights into the structural and
functional properties of complex systems. For example, it is widely known that
neuronal circuits tend to organize into basic functional topological modules,
called "network motifs". In this article we show that network science tools can
also be successfully applied to the study of artificial neural networks
operating according to self-organizing (learning) principles. In particular, we
study the emergence of network motifs in multi-layer perceptrons, whose initial
connectivity is defined as a stack of fully-connected, bipartite graphs. Our
simulations show that the final network topology is primarily shaped by
learning dynamics, but can be strongly biased by choosing appropriate weight
initialization schemes. Overall, our results suggest that non-trivial
initialization strategies can make learning more effective by promoting the
development of useful network motifs, which are often surprisingly consistent
with those observed in general transduction networks.
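As a rough illustration of this kind of analysis, the sketch below binarizes the trained weights of a small multi-layer perceptron by magnitude and counts a simple four-node "bi-fan" motif in each bipartite layer. The layer sizes, threshold, and motif choice are illustrative assumptions, not the authors' pipeline; a proper motif census would also compare counts against a randomized null model.

```python
import numpy as np

# Minimal sketch (not the authors' pipeline): threshold the trained
# weights of a small MLP to obtain a directed graph, then count the
# "bi-fan" motif: two units in one layer both projecting to the same
# two units in the next layer.

rng = np.random.default_rng(0)

# Hypothetical trained weights for layers of sizes 8 -> 6 -> 4.
W1 = rng.normal(size=(8, 6))
W2 = rng.normal(size=(6, 4))

def binarize(W, quantile=0.75):
    """Keep only the strongest connections (by absolute weight)."""
    thr = np.quantile(np.abs(W), quantile)
    return np.abs(W) >= thr

def count_bifans(A):
    """Count bi-fan motifs in a bipartite adjacency matrix A (pre x post)."""
    count = 0
    n_pre = A.shape[0]
    for i in range(n_pre):
        for j in range(i + 1, n_pre):
            shared = np.logical_and(A[i], A[j]).sum()  # common targets
            count += shared * (shared - 1) // 2        # pairs of targets
    return count

for name, W in [("layer 1", W1), ("layer 2", W2)]:
    print(name, "bi-fan count:", count_bifans(binarize(W)))
```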
Related papers
- From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks [47.13391046553908]
In artificial networks, the effectiveness of these models relies on their ability to build task-specific representations.
Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich (feature-learning) regime, where representations evolve dynamically.
The paper derives exact solutions that capture the evolution of representations and the Neural Tangent Kernel across the spectrum from the rich to the lazy regime.
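A toy numpy sketch of the lazy/rich distinction (the two-layer linear model, scales, and learning rate are my assumptions, not the paper's exact setup): with a small initialization the hidden representation moves far from where it started, while a large initialization leaves it comparatively static.

```python
import numpy as np

# Train a two-layer linear network y = W2 @ W1 @ x with gradient descent
# and track how far the first-layer representation W1 moves from its
# initialization, relative to its initial scale.

rng = np.random.default_rng(1)
d_in, d_hidden, n = 10, 32, 200
X = rng.normal(size=(d_in, n))
Y = rng.normal(size=(1, d_in)) @ X          # linear teacher targets

def train(init_scale, lr=1e-3, steps=3000):
    W1 = init_scale * rng.normal(size=(d_hidden, d_in)) / np.sqrt(d_in)
    W2 = init_scale * rng.normal(size=(1, d_hidden)) / np.sqrt(d_hidden)
    W1_0 = W1.copy()
    for _ in range(steps):
        err = W2 @ W1 @ X - Y               # residual on the batch
        gW2 = err @ (W1 @ X).T / n
        gW1 = W2.T @ err @ X.T / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    # relative movement of the learned representation
    return np.linalg.norm(W1 - W1_0) / np.linalg.norm(W1_0)

for scale in [0.05, 1.0, 5.0]:
    print(f"init scale {scale:4}: relative W1 change = {train(scale):.3f}")
```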
arXiv Detail & Related papers (2024-09-22T23:19:04Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
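A quick way to see the dependence on the nonlinearity (a generic construction, not the paper's analysis): pass the same inputs through one wide random layer and compare the resulting representation kernels under ReLU and tanh.

```python
import numpy as np

# The same random wide layer produces qualitatively different
# representation geometries depending on the nonlinearity applied to it.

rng = np.random.default_rng(2)
d, width, n = 20, 4096, 50
X = rng.normal(size=(n, d)) / np.sqrt(d)    # n inputs of roughly unit norm
W = rng.normal(size=(d, width))             # shared random first layer

def kernel(phi):
    H = phi(X @ W)                          # hidden representation
    K = H @ H.T / width                     # empirical feature kernel
    return K / np.sqrt(np.outer(np.diag(K), np.diag(K)))  # cosine-normalize

K_relu = kernel(lambda z: np.maximum(z, 0.0))
K_tanh = kernel(np.tanh)

# Off-diagonal similarity structure differs between the nonlinearities.
mask = ~np.eye(n, dtype=bool)
print("mean off-diag similarity, ReLU:", K_relu[mask].mean().round(3))
print("mean off-diag similarity, tanh:", K_tanh[mask].mean().round(3))
```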
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
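A minimal sketch of the underlying idea, using the unit sphere because its exponential map has a closed form (the construction details here are assumptions, not the paper's exact formulation): the Euclidean residual update x + f(x) is replaced by the exponential map of a tangent vector.

```python
import numpy as np

# A "Riemannian residual block" replaces x + f(x) with exp_x(P_x f(x)),
# where P_x projects onto the tangent space at x and exp_x is the
# manifold's exponential map -- here, the unit sphere S^{d-1}.

rng = np.random.default_rng(3)
d = 5
W = 0.1 * rng.normal(size=(d, d))           # hypothetical block parameters

def tangent_project(x, v):
    """Project v onto the tangent space of the sphere at x."""
    return v - (x @ v) * x

def sphere_exp(x, v, eps=1e-12):
    """Exponential map on the unit sphere at point x."""
    nv = np.linalg.norm(v)
    if nv < eps:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def residual_block(x):
    v = tangent_project(x, np.tanh(W @ x))  # candidate update f(x)
    return sphere_exp(x, v)                 # Riemannian analogue of x + f(x)

x = rng.normal(size=d)
x /= np.linalg.norm(x)                      # start on the sphere
for _ in range(3):
    x = residual_block(x)
    print("||x|| =", np.linalg.norm(x).round(6))  # stays on the manifold
```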
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
- The Neural Race Reduction: Dynamics of Abstraction in Gated Networks [12.130628846129973]
We introduce the Gated Deep Linear Network framework that schematizes how pathways of information flow impact learning dynamics.
We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning.
Our work gives rise to general hypotheses relating neural architecture to learning and provides a mathematical approach towards understanding the design of more complex architectures.
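A schematic of the framework's central object (simplified; the paper's gating structure is richer): a deep linear network whose pathways are switched by multiplicative, context-dependent gates, so the map stays exactly linear within each gating pattern.

```python
import numpy as np

# Two linear pathways share an output readout; gates route each input
# through one pathway depending on context rather than on activations,
# keeping the dynamics linear within every gating pattern.

rng = np.random.default_rng(4)
d = 6
W_a = rng.normal(size=(d, d)) / np.sqrt(d)  # pathway A weights
W_b = rng.normal(size=(d, d)) / np.sqrt(d)  # pathway B weights
W_out = rng.normal(size=(1, d)) / np.sqrt(d)

def forward(x, context):
    g_a, g_b = (1.0, 0.0) if context == "task_a" else (0.0, 1.0)
    h = g_a * (W_a @ x) + g_b * (W_b @ x)   # gated pathway mixture
    return W_out @ h

x = rng.normal(size=d)
print("task A output:", forward(x, "task_a"))
print("task B output:", forward(x, "task_b"))
```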
arXiv Detail & Related papers (2022-07-21T12:01:03Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
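The phenomenon is easy to probe empirically (a generic experiment, not the paper's formal analysis): push a batch through a deep random ReLU network and track the numerical rank of the feature matrix layer by layer.

```python
import numpy as np

# Track the numerical rank of representations as depth increases in a
# random ReLU network; the tolerance and sizes are illustrative choices.

rng = np.random.default_rng(5)
n, d, depth = 256, 64, 10
H = rng.normal(size=(n, d))                 # batch of input features

def numerical_rank(M, rtol=1e-3):
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > rtol * s[0]).sum())     # count non-negligible directions

print("input rank:", numerical_rank(H))
for layer in range(depth):
    W = rng.normal(size=(d, d)) / np.sqrt(d)
    H = np.maximum(H @ W, 0.0)              # ReLU layer
    print(f"layer {layer + 1:2d} rank:", numerical_rank(H))
```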
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
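A toy version of the mechanism (details are my assumptions): attach a learnable logit to every connection, gate each weight with a sigmoid of its logit, and update both by gradient descent so the connectivity itself is learned differentiably.

```python
import numpy as np

# Connectivity is encoded as a learnable logit per edge; a sigmoid of
# the logit gates the corresponding weight, so the topology is shaped
# by the same gradient descent that trains the weights.

rng = np.random.default_rng(6)
d_in, d_out, n = 8, 4, 100
X = rng.normal(size=(n, d_in))
Y = X @ rng.normal(size=(d_in, d_out))      # hypothetical linear targets

W = 0.1 * rng.normal(size=(d_in, d_out))    # ordinary weights
E = np.zeros((d_in, d_out))                 # edge logits, one per connection

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(500):
    G = sigmoid(E)                          # differentiable edge gates
    err = X @ (G * W) - Y                   # gated forward pass
    g = X.T @ err / n                       # gradient w.r.t. G*W (up to a constant)
    W -= lr * g * G                         # chain rule through the gate
    E -= lr * g * W * G * (1.0 - G)         # chain rule through the sigmoid

print("final loss:", float((err ** 2).mean()))
print("edges kept (gate > 0.5):", int((sigmoid(E) > 0.5).sum()), "of", d_in * d_out)
```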
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing a neural network to learn both its size and its topology during the course of gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
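A crude stand-in for this idea (the paper's actual mechanism differs): alternate gradient steps with periodic prune-and-grow steps, so training itself shapes the network's effective size and wiring.

```python
import numpy as np

# Alternate gradient descent with a prune-and-grow step: weak
# connections are removed by magnitude, and an occasional pruned
# connection is regrown so the topology keeps being explored.

rng = np.random.default_rng(7)
d_in, d_out, n = 16, 1, 200
X = rng.normal(size=(n, d_in))
Y = X[:, :3].sum(axis=1, keepdims=True)     # only 3 inputs actually matter

W = 0.1 * rng.normal(size=(d_in, d_out))
mask = np.ones_like(W, dtype=bool)

lr = 0.1
for step in range(300):
    err = X @ (W * mask) - Y
    W -= lr * (X.T @ err / n) * mask
    if step % 50 == 49:                     # periodic prune-and-grow
        thr = np.quantile(np.abs(W[mask]), 0.3)
        mask &= np.abs(W) > thr             # prune weak connections
        dead = np.argwhere(~mask)           # regrow one pruned connection
        if len(dead):
            i, j = dead[rng.integers(len(dead))]
            mask[i, j], W[i, j] = True, 0.01 * rng.normal()

print("active connections:", int(mask.sum()), "of", mask.size)
```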
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Input-to-State Representation in linear reservoirs dynamics [15.491286626948881]
Reservoir computing is a popular approach to designing recurrent neural networks.
The working principle of these networks, however, is not fully understood.
A novel analysis of the dynamics of such networks is proposed.
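For context, a minimal linear reservoir in the spirit of the paper's setting (the standard echo-state recipe, not the paper's specific analysis): a fixed random recurrent map driven by the input, with only a linear readout trained by ridge regression.

```python
import numpy as np

# A linear reservoir: the recurrent and input weights are fixed at
# random (spectral radius < 1 for stability); only the readout is fit.

rng = np.random.default_rng(8)
n_res, T = 100, 500
u = np.sin(0.1 * np.arange(T + 1))          # toy input signal
target = u[1:]                              # predict the next input value

W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # enforce spectral radius 0.9
w_in = rng.normal(size=n_res)

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = W @ x + w_in * u[t]                 # linear reservoir update
    states[t] = x

# Linear readout by ridge regression -- the only trained component.
ridge = 1e-6
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ w_out
print("train MSE:", ((pred - target) ** 2).mean())
```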
arXiv Detail & Related papers (2020-03-24T00:14:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.