Machine Learning Link Inference of Noisy Delay-coupled Networks with
Opto-Electronic Experimental Tests
- URL: http://arxiv.org/abs/2010.15289v3
- Date: Fri, 14 May 2021 14:26:15 GMT
- Title: Machine Learning Link Inference of Noisy Delay-coupled Networks with
Opto-Electronic Experimental Tests
- Authors: Amitava Banerjee, Joseph D. Hart, Rajarshi Roy, Edward Ott
- Abstract summary: We devise a machine learning technique to solve the general problem of inferring network links that have time-delays.
We first train a type of machine learning system known as reservoir computing to mimic the dynamics of the unknown network.
We formulate and test a technique that uses the trained parameters of the reservoir system output layer to deduce an estimate of the unknown network structure.
- Score: 1.0766846340954257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We devise a machine learning technique to solve the general problem of
inferring network links that have time-delays. The goal is to do this purely
from time-series data of the network nodal states. This task has applications
in fields ranging from applied physics and engineering to neuroscience and
biology. To achieve this, we first train a type of machine learning system
known as reservoir computing to mimic the dynamics of the unknown network. We
formulate and test a technique that uses the trained parameters of the
reservoir system output layer to deduce an estimate of the unknown network
structure. Our technique, by its nature, is non-invasive, but is motivated by
the widely-used invasive network inference method whereby the responses to
active perturbations applied to the network are observed and employed to infer
network links (e.g., knocking down genes to infer gene regulatory networks). We
test this technique on experimental and simulated data from delay-coupled
opto-electronic oscillator networks. We show that the technique often yields
very good results, particularly if the system does not exhibit synchrony. We
also find that the presence of dynamical noise can strikingly enhance the
accuracy and ability of our technique, especially in networks that exhibit
synchrony.
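As a concrete illustration of the pipeline described above, here is a hypothetical, much-simplified sketch (not the authors' code): it trains an echo-state-network reservoir computer for one-step prediction of toy coupled-node time series, then scores candidate links by the time-averaged input-output sensitivity of the trained machine. The toy dynamics, all sizes, the sensitivity score, and the threshold are illustrative assumptions, and explicit time-delays are omitted for brevity.
```python
# Hypothetical sketch, NOT the authors' code: train an echo-state network
# (reservoir computer) for one-step prediction of all nodes, then score
# candidate links by the sensitivity of each predicted node to each input
# node through the trained readout. Toy dynamics, sizes, and the threshold
# are illustrative; explicit time-delays are omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)

# Toy coupled network (stand-in for the opto-electronic data); A_true is the
# ground-truth adjacency, which would be unknown in a real application.
N = 5
A_true = (rng.random((N, N)) < 0.3).astype(float)
np.fill_diagonal(A_true, 0.0)

T = 3000
x = np.zeros((T, N))
x[0] = rng.random(N)
for t in range(T - 1):  # noisy coupled sine-map dynamics (toy choice)
    x[t + 1] = (0.9 * np.sin(2.0 * x[t]) + 0.3 * (A_true @ np.sin(x[t]))
                + 0.02 * rng.normal(size=N))

# Train a reservoir computer for one-step prediction: r(t+1) encodes the
# input history, and a ridge-regression readout maps r(t+1) to x(t+1).
Dr = 300
W_in = rng.uniform(-0.5, 0.5, (Dr, N))
W = rng.normal(0.0, 1.0, (Dr, Dr))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

r = np.zeros((T, Dr))
for t in range(T - 1):
    r[t + 1] = np.tanh(W @ r[t] + W_in @ x[t])

warmup, beta = 100, 1e-6
R, Y = r[warmup + 1:T], x[warmup + 1:T]
W_out = np.linalg.solve(R.T @ R + beta * np.eye(Dr), R.T @ Y).T  # (N, Dr)

# Score links via the Jacobian of the trained map: the prediction is
# x_hat(t+1) = W_out tanh(W r(t) + W_in x(t)), so its derivative w.r.t.
# x_j(t) is W_out diag(1 - r(t+1)^2) W_in; average its magnitude over time.
score = np.zeros((N, N))
for t in range(warmup, T - 1):
    score += np.abs((W_out * (1.0 - r[t + 1] ** 2)) @ W_in)
score /= (T - 1 - warmup)
np.fill_diagonal(score, 0.0)

A_est = (score > score.mean() + score.std()).astype(float)  # crude threshold
print("true adjacency:\n", A_true)
print("inferred adjacency:\n", A_est)
```
In this sketch the dynamical noise enters through the `0.02 * rng.normal(...)` term; per the abstract, such noise can actually aid inference, plausibly by keeping nodes off the synchronization manifold where their time series would be harder to tell apart.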
Related papers
- Dissipation-driven quantum generative adversarial networks [11.833077116494929]
We introduce a novel dissipation-driven quantum generative adversarial network (DQGAN) architecture specifically tailored for generating classical data.
The classical data is encoded into the input qubits of the input layer via strong tailored dissipation processes.
We extract both the generated data and the classification results by measuring the observables of the steady state of the output qubits.
arXiv Detail & Related papers (2024-08-28T07:41:58Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Network Anomaly Detection Using Federated Learning [0.483420384410068]
We introduce a robust and scalable framework that enables efficient network anomaly detection.
We leverage federated learning, in which multiple participants train a global model jointly.
The proposed method performs better than baseline machine learning techniques on the UNSW-NB15 data set.
arXiv Detail & Related papers (2023-03-13T20:16:30Z)
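As a concrete illustration of the federated idea summarized above, here is a hypothetical minimal federated-averaging (FedAvg-style) sketch; the toy data, the logistic-regression model, and all hyperparameters are illustrative assumptions, not the paper's setup (which evaluates on the UNSW-NB15 data set).
```python
# Hypothetical FedAvg-style sketch, NOT the paper's code: each participant
# trains a local logistic-regression model on its own data, and a server
# averages the parameters into a global model.
import numpy as np

rng = np.random.default_rng(1)
d, n_clients, n_rounds, local_steps, lr = 10, 4, 20, 25, 0.1

# Toy "traffic features" with a shared ground-truth anomaly direction.
w_true = rng.normal(size=d)

def make_client(n=200):
    X = rng.normal(size=(n, d))
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
    return X, y

clients = [make_client() for _ in range(n_clients)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w_global = np.zeros(d)
for _ in range(n_rounds):
    local_ws = []
    for X, y in clients:              # each participant trains locally
        w = w_global.copy()
        for _ in range(local_steps):  # plain gradient steps on local data
            w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
        local_ws.append(w)
    w_global = np.mean(local_ws, axis=0)  # server averages the local models

X_test, y_test = make_client(500)
acc = np.mean((sigmoid(X_test @ w_global) > 0.5) == y_test)
print(f"global-model test accuracy: {acc:.3f}")
```
Only model parameters travel to the server; the raw data stays with each participant, which is the privacy motivation for the federated setup.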
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Parallel Machine Learning for Forecasting the Dynamics of Complex Networks [0.0]
We present a machine learning scheme for forecasting the dynamics of large complex networks.
We use a parallel architecture that mimics the topology of the network of interest.
arXiv Detail & Related papers (2021-08-27T06:06:41Z)
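The parallel scheme summarized above can be illustrated with a hypothetical sketch: one small reservoir computer per node, each fed by the node's own state plus the states of its known neighbors, so that the machine's layout mirrors the network topology. The toy dynamics and all sizes are illustrative assumptions, not the authors' setup.
```python
# Hypothetical sketch, NOT the authors' code: one small reservoir per node,
# each fed by the node's own state plus the states of its known neighbors,
# so the machine's layout mirrors the network topology.
import numpy as np

rng = np.random.default_rng(2)
N, T, Dr = 6, 2000, 100
A = (rng.random((N, N)) < 0.3).astype(float)
np.fill_diagonal(A, 0.0)  # known topology of the network to forecast

x = np.zeros((T, N))
x[0] = rng.random(N)
for t in range(T - 1):    # toy coupled dynamics standing in for real data
    x[t + 1] = 0.8 * np.sin(2.0 * x[t]) + 0.3 * (A @ np.sin(x[t]))

preds = np.zeros(N)
for i in range(N):        # an independent small reservoir for each node
    inputs = np.flatnonzero(A[i]).tolist() + [i]   # neighbors + self
    W_in = rng.uniform(-0.5, 0.5, (Dr, len(inputs)))
    W = rng.normal(0.0, 1.0, (Dr, Dr))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    r = np.zeros((T, Dr))
    for t in range(T - 1):
        r[t + 1] = np.tanh(W @ r[t] + W_in @ x[t, inputs])
    # Ridge-regression readout mapping r(t) to x(t, i), then step once more.
    R, y = r[101:T], x[101:T, i]
    w_out = np.linalg.solve(R.T @ R + 1e-6 * np.eye(Dr), R.T @ y)
    r_next = np.tanh(W @ r[T - 1] + W_in @ x[T - 1, inputs])
    preds[i] = r_next @ w_out   # one-step-ahead forecast of node i

print("one-step forecasts:", np.round(preds, 3))
```
Because each reservoir sees only a node's local neighborhood, the per-node training problems are independent and can run in parallel, which is what lets such a scheme scale to large networks.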
- Reservoir Stack Machines [77.12475691708838]
Memory-augmented neural networks equip a recurrent neural network with an explicit memory to support tasks that require information storage.
We introduce the reservoir stack machine, a model which can provably recognize all deterministic context-free languages.
Our results show that the reservoir stack machine achieves zero error, even on test sequences longer than the training data.
arXiv Detail & Related papers (2021-05-04T16:50:40Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Physical deep learning based on optimal control of dynamical systems [0.0]
In this study, we perform pattern recognition based on the optimal control of continuous-time dynamical systems.
As a key example, we apply the dynamics-based recognition approach to an optoelectronic delay system.
This approach contrasts with conventional multilayer neural networks, which require a large number of weight parameters to be trained.
arXiv Detail & Related papers (2020-12-16T06:38:01Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.