Backpropagation on Dynamical Networks
- URL: http://arxiv.org/abs/2207.03093v1
- Date: Thu, 7 Jul 2022 05:22:44 GMT
- Title: Backpropagation on Dynamical Networks
- Authors: Eugene Tan, Débora Corrêa, Thomas Stemler, Michael Small
- Abstract summary: We propose a network inference method based on the backpropagation through time (BPTT) algorithm commonly used to train recurrent neural networks.
An approximation of local node dynamics is first constructed using a neural network.
Freerun prediction performance with the resulting local models and weights was found to be comparable to the true system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamical networks are versatile models that can describe a variety of
behaviours such as synchronisation and feedback. However, applying these models
in real world contexts is difficult as prior information pertaining to the
connectivity structure or local dynamics is often unknown and must be inferred
from time series observations of network states. Additionally, the influence of
coupling interactions between nodes further complicates the isolation of local
node dynamics. Given the architectural similarities between dynamical networks
and recurrent neural networks (RNN), we propose a network inference method
based on the backpropagation through time (BPTT) algorithm commonly used to
train recurrent neural networks. This method aims to simultaneously infer both
the connectivity structure and local node dynamics purely from observation of
node states. An approximation of local node dynamics is first constructed using
a neural network. This is alternated with an adapted BPTT algorithm to regress
corresponding network weights by minimising prediction errors of the dynamical
network based on the previously constructed local models until convergence is
achieved. This method was found to be successful in identifying the connectivity
structure for coupled networks of Lorenz, Chua and FitzHugh-Nagumo oscillators.
Freerun prediction performance with the resulting local models and weights was
found to be comparable to the true system with noisy initial conditions. The
method is also extended to non-conventional network couplings such as
asymmetric negative coupling.
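To make the alternating scheme in the abstract concrete, below is a minimal sketch, not the authors' released code: a shared network approximates the local node dynamics, a weight matrix holds the inferred connectivity, and the two are refined in alternation by backpropagation through time on multi-step prediction error. The coupling form x_{t+1} = f(x_t) + W x_t, the shared local model, and all names and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of BPTT-based network inference (illustrative, not the paper's code).
import torch
import torch.nn as nn

n_nodes, dim, horizon = 10, 3, 20  # assumed toy sizes

class LocalDynamics(nn.Module):
    """Shared neural approximation of the local node dynamics."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))
    def forward(self, x):            # x: (n_nodes, dim)
        return self.net(x)

f = LocalDynamics(dim)
W = torch.zeros(n_nodes, n_nodes, requires_grad=True)   # connectivity weights
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_W = torch.optim.Adam([W], lr=1e-2)

def rollout(x0, steps):
    """Free-run the coupled system: x_{t+1} = f(x_t) + W x_t (assumed coupling form)."""
    xs, x = [], x0
    for _ in range(steps):
        x = f(x) + W @ x             # local dynamics plus linear coupling
        xs.append(x)
    return torch.stack(xs)           # (steps, n_nodes, dim)

def bptt_step(x_obs, train_W):
    """One alternation step: update either the local model or the weights."""
    opt_f.zero_grad()
    opt_W.zero_grad()
    pred = rollout(x_obs[0], horizon)
    loss = ((pred - x_obs[1:horizon + 1]) ** 2).mean()
    loss.backward()                  # BPTT through the unrolled dynamics
    (opt_W if train_W else opt_f).step()
    return loss.item()

# Toy usage with synthetic observations in place of real node time series.
x_obs = torch.randn(horizon + 1, n_nodes, dim)
for epoch in range(100):
    bptt_step(x_obs, train_W=False)  # refine the local dynamics model
    bptt_step(x_obs, train_W=True)   # regress the connectivity weights
```

Alternating the two updates mirrors the procedure the abstract describes, constructing local models and then regressing network weights against prediction error until convergence; the exact loss, unrolling horizon, and coupling form used in the paper may differ.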
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Analyzing Populations of Neural Networks via Dynamical Model Embedding [10.455447557943463]
A core challenge in the interpretation of deep neural networks is identifying commonalities between the underlying algorithms implemented by distinct networks trained for the same task.
Motivated by this problem, we introduce DYNAMO, an algorithm that constructs a low-dimensional manifold where each point corresponds to a neural network model, and two points are nearby if the corresponding neural networks enact similar high-level computational processes.
DYNAMO takes as input a collection of pre-trained neural networks and outputs a meta-model that emulates the dynamics of the hidden states as well as the outputs of any model in the collection.
arXiv Detail & Related papers (2023-02-27T19:00:05Z)
- Verification of Neural-Network Control Systems by Integrating Taylor Models and Zonotopes [0.0]
We study the verification problem for closed-loop dynamical systems with neural-network controllers (NNCS).
We present an algorithm to chain approaches based on Taylor models and zonotopes, yielding a precise reachability algorithm for NNCS.
arXiv Detail & Related papers (2021-12-16T20:46:39Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN), with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to the regularization of neural networks based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture using Feedback-Modulated Delay Loops [0.0]
We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops.
This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals.
The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
arXiv Detail & Related papers (2020-11-19T21:45:58Z)
- Online Estimation and Community Detection of Network Point Processes for Event Streams [12.211623200731788]
A common goal in network modeling is to uncover the latent community structure present among nodes.
We propose a fast online variational inference algorithm for estimating the latent structure underlying dynamic event arrivals on a network.
We demonstrate that online inference can achieve community-recovery performance comparable to that of non-online variants.
arXiv Detail & Related papers (2020-09-03T15:39:55Z)
- Modeling Dynamic Heterogeneous Network for Link Prediction using Hierarchical Attention with Temporal RNN [16.362525151483084]
We propose a novel dynamic heterogeneous network embedding method, termed DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns.
We benchmark our method on four real-world datasets for the task of link prediction.
arXiv Detail & Related papers (2020-04-01T17:16:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.