Introduction to dynamical mean-field theory of generic random neural
networks
- URL: http://arxiv.org/abs/2305.08459v2
- Date: Tue, 16 May 2023 06:36:46 GMT
- Title: Introduction to dynamical mean-field theory of generic random neural
networks
- Authors: Wenxuan Zou and Haiping Huang
- Abstract summary: It is not easy for beginners to grasp the essence of this tool and the underlying physics.
We give a pedagogical introduction to this method using the particular example of generic random neural networks.
The numerical procedure for solving the integro-differential mean-field equations is also detailed.
- Score: 2.0711789781518752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamical mean-field theory is a powerful physics tool used to analyze the
typical behavior of neural networks, where neurons can be recurrently
connected or stacked in multiple layers. However, it is not easy for beginners
to grasp the essence of this tool and the underlying physics. Here, we give a
pedagogical introduction to this method using the particular example of generic
random neural networks, in which neurons are randomly and fully connected by
correlated synapses, so that the network
exhibits rich emergent collective dynamics. We also review important past and
recent works that apply this tool. In addition, a physically transparent
alternative, the dynamical cavity method, is introduced and shown to yield
exactly the same results. The numerical procedure for solving the
integro-differential mean-field equations is detailed as well, with an
illustration of how the fluctuation-dissipation theorem can be explored.
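To make the numerical part concrete, below is a minimal sketch (not the authors' code) of the standard self-consistent iteration for this kind of dynamical mean-field problem, assuming the usual random rate network dx_i/dt = -x_i + sum_j J_ij phi(x_j) with phi = tanh and i.i.d. couplings of variance g^2/N. In the mean-field limit each neuron obeys a single-site equation dx/dt = -x + eta(t), where eta is a Gaussian field whose correlator must self-consistently equal g^2 <phi(x(t)) phi(x(t'))>. All numerical choices here (g, time horizon, step size, number of Monte Carlo paths, damping) are illustrative assumptions, not values from the paper.

```python
# Minimal DMFT sketch for dx/dt = -x + eta(t), with the noise correlator
# determined self-consistently as C_eta(t, t') = g^2 <phi(x(t)) phi(x(t'))>.
import numpy as np

g, T, dt = 1.5, 20.0, 0.05           # coupling gain, time horizon, Euler step
n_steps = int(T / dt)
n_paths, n_iter = 2000, 30           # Monte Carlo paths, fixed-point sweeps
rng = np.random.default_rng(0)

# initial guess for the noise correlator <eta(t) eta(t')>
C_eta = g**2 * np.eye(n_steps)

for it in range(n_iter):
    # sample colored Gaussian noise paths with the current correlator
    # (a small jitter keeps the matrix positive definite for Cholesky)
    L = np.linalg.cholesky(C_eta + 1e-8 * np.eye(n_steps))
    eta = rng.standard_normal((n_paths, n_steps)) @ L.T

    # integrate the single-site equation dx/dt = -x + eta(t) with Euler steps
    x = np.zeros((n_paths, n_steps))
    for t in range(n_steps - 1):
        x[:, t + 1] = x[:, t] + dt * (-x[:, t] + eta[:, t])

    # self-consistency: update C_eta(t, t') = g^2 <phi(x(t)) phi(x(t'))>
    phi = np.tanh(x)
    C_new = g**2 * (phi.T @ phi) / n_paths

    # damped update to stabilize the fixed-point iteration
    C_eta = 0.5 * C_eta + 0.5 * C_new

# autocorrelation of x estimated from the last sweep
C_x = (x.T @ x) / n_paths
tau = int(5.0 / dt)
print("C(T,T) =", C_x[-1, -1], "  C(T,T-5) =", C_x[-1, -1 - tau])
```

Sampling the colored Gaussian field through a Cholesky factor and damping the correlator update are common ways to stabilize the fixed-point loop; once it converges, the correlation function and the linear response to a small perturbation can be compared to probe the fluctuation-dissipation theorem.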
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of rethinking our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Dynamic neurons: A statistical physics approach for analyzing deep neural networks [1.9662978733004601]
We treat neurons as additional degrees of freedom in interactions, simplifying the structure of deep neural networks.
By utilizing translational symmetry and renormalization group transformations, we can analyze critical phenomena.
This approach may open new avenues for studying deep neural networks using statistical physics.
arXiv Detail & Related papers (2024-10-01T04:39:04Z) - Expressivity of Neural Networks with Random Weights and Learned Biases [44.02417750529102]
Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can be learned by tuning only small subsets of a network's parameters.
We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only.
Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights.
arXiv Detail & Related papers (2024-07-01T04:25:49Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Path sampling of recurrent neural networks by incorporating known
physics [0.0]
We present a path sampling approach that allows generic thermodynamic or kinetic constraints to be incorporated into recurrent neural networks.
We demonstrate the method for a widely used type of recurrent neural network, the long short-term memory (LSTM) network.
Our method can be easily generalized to other generative artificial intelligence models and to generic time series in different areas of physical and social sciences.
arXiv Detail & Related papers (2022-03-01T16:35:50Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Formalizing Generalization and Robustness of Neural Networks to Weight
Perturbations [58.731070632586594]
We provide the first formal analysis for feed-forward neural networks with non-negative monotone activation functions against weight perturbations.
We also design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations.
arXiv Detail & Related papers (2021-03-03T06:17:03Z) - Input-to-State Representation in linear reservoirs dynamics [15.491286626948881]
Reservoir computing is a popular approach for designing recurrent neural networks.
The working principle of these networks is not fully understood.
A novel analysis of the dynamics of such networks is proposed.
arXiv Detail & Related papers (2020-03-24T00:14:25Z) - Approximation Bounds for Random Neural Networks and Reservoir Systems [8.143750358586072]
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights.
In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well.
arXiv Detail & Related papers (2020-02-14T09:43:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.