KuraNet: Systems of Coupled Oscillators that Learn to Synchronize
- URL: http://arxiv.org/abs/2105.02838v1
- Date: Thu, 6 May 2021 17:26:33 GMT
- Title: KuraNet: Systems of Coupled Oscillators that Learn to Synchronize
- Authors: Matthew Ricci, Minju Jung, Yuwei Zhang, Mathieu Chalvidal, Aneri Soni,
Thomas Serre
- Abstract summary: "KuraNet" is a deep-learning system of coupled oscillators that can learn to synchronize across a distribution of disordered network conditions.
We show how KuraNet can be used to empirically explore the conditions of global synchrony in analytically impenetrable models with disordered natural frequencies, external field strengths, and interaction delays.
In all cases, we show how KuraNet can generalize to both new data and new network scales, making it easy to work with small systems and form hypotheses about the thermodynamic limit.
- Score: 8.53236289790987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Networks of coupled oscillators are some of the most studied objects in the
theory of dynamical systems. Two important areas of current interest are the
study of synchrony in highly disordered systems and the modeling of systems
with adaptive network structures. Here, we present a single approach to both of
these problems in the form of "KuraNet", a deep-learning-based system of
coupled oscillators that can learn to synchronize across a distribution of
disordered network conditions. The key feature of the model is the replacement
of the traditionally static couplings with a coupling function which can learn
optimal interactions within heterogeneous oscillator populations. We apply our
approach to the eponymous Kuramoto model and demonstrate how KuraNet can learn
data-dependent coupling structures that promote either global or cluster
synchrony. For example, we show how KuraNet can be used to empirically explore
the conditions of global synchrony in analytically impenetrable models with
disordered natural frequencies, external field strengths, and interaction
delays. In a sequence of cluster synchrony experiments, we further show how
KuraNet can function as a data classifier by synchronizing into coherent
assemblies. In all cases, we show how KuraNet can generalize to both new data
and new network scales, making it easy to work with small systems and form
hypotheses about the thermodynamic limit. Our proposed learning-based approach
is broadly applicable to arbitrary dynamical systems with wide-ranging
relevance to modeling in physics and systems biology.
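To make the core idea concrete, below is a minimal, hypothetical sketch (assumed details, not the authors' released code) of Kuramoto dynamics whose couplings come from a learned function: a small MLP maps each pair of node features to a coupling strength K(x_i, x_j), the phases are integrated with a simple Euler scheme, dtheta_i/dt = omega_i + (1/N) sum_j K(x_i, x_j) sin(theta_j - theta_i), and the loss 1 - R penalizes deviation from global synchrony, where R = |(1/N) sum_j exp(i*theta_j)| is the Kuramoto order parameter. All module names, the integrator, and the hyperparameters are illustrative assumptions.

```python
# Hedged sketch of a KuraNet-style learned coupling (PyTorch).
# Assumptions (not from the paper): MLP coupling, Euler integration,
# features = natural frequencies, loss = 1 - global order parameter R.
import torch
import torch.nn as nn

class CouplingNet(nn.Module):
    """Maps a pair of node features to a coupling strength K(x_i, x_j)."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, feat_dim) node features; returns an (N, N) coupling matrix.
        n = x.shape[0]
        xi = x.unsqueeze(1).expand(n, n, -1)   # features of oscillator i
        xj = x.unsqueeze(0).expand(n, n, -1)   # features of oscillator j
        return self.mlp(torch.cat([xi, xj], dim=-1)).squeeze(-1)

def simulate(theta, omega, K, steps=100, dt=0.1):
    """Euler rollout of dtheta_i/dt = omega_i + (1/N) sum_j K_ij sin(theta_j - theta_i)."""
    for _ in range(steps):
        diff = theta.unsqueeze(0) - theta.unsqueeze(1)   # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + (K * torch.sin(diff)).mean(dim=1))
    return theta

def order_parameter(theta):
    # R = |(1/N) sum_j exp(i * theta_j)|; R = 1 means perfect global synchrony.
    return torch.sqrt(torch.cos(theta).mean() ** 2 + torch.sin(theta).mean() ** 2)

# Training loop: learn couplings that synchronize disordered frequencies.
n, feat_dim = 64, 1
net = CouplingNet(feat_dim)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    omega = torch.randn(n)                  # disordered natural frequencies
    x = omega.unsqueeze(-1)                 # here, features are the frequencies
    theta0 = 2 * torch.pi * torch.rand(n)
    loss = 1.0 - order_parameter(simulate(theta0, omega, net(x)))
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the loss is differentiated through the rollout, gradients reach the coupling network; and because the coupling is a function of per-pair features rather than a fixed N x N matrix, the same trained function can be evaluated at new network scales, consistent with the generalization claims in the abstract.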
Related papers
- Harnessing omnipresent oscillator networks as computational resource [0.0]
We present a universal framework for harnessing oscillator networks as a computational resource.
We force the Kuramoto model with a nonlinear target system; after the target system is substituted with a trained feedback loop, the network emulates the target system (a minimal sketch of this protocol follows this entry).
arXiv Detail & Related papers (2025-02-07T10:42:01Z) - On Using The Path Integral Formalism to Interpret Synchronization in Quantum Graph Networks [0.0]
- On Using The Path Integral Formalism to Interpret Synchronization in Quantum Graph Networks [0.0]
We discuss the concept of using Lagrangian mechanics for systems undergoing synchronization.
By replacing the concept of least action with a least signaling term, we investigate how the path integral representation can be applied to study synchronization dynamics.
arXiv Detail & Related papers (2024-08-03T00:15:31Z) - Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions.
NIIP assigns low energy to the subset of trajectories that respect the observed relational constraints.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
arXiv Detail & Related papers (2023-10-23T00:44:17Z) - An effective theory of collective deep learning [1.3812010983144802]
We introduce a minimal model that condenses several recent decentralized algorithms.
We derive an effective theory for linear networks to show that the coarse-grained behavior of our system is equivalent to a deformed Ginzburg-Landau model.
We validate the theory in coupled ensembles of realistic neural networks trained on the MNIST dataset.
arXiv Detail & Related papers (2023-10-19T14:58:20Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust
Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Networks' modulation: How different structural network properties affect
the global synchronization of coupled Kuramoto oscillators [0.0]
Synchronization occurs when oscillating objects tune their rhythms as they interact with each other.
We study the impact of different network architectures: Fully-Connected, Random, Regular ring lattice, Small-World, and Scale-Free graphs (a toy comparison is sketched after this entry).
Our main finding is that in Scale-Free and Random networks, choosing nodes by their eigenvector centrality and average shortest path length shows a systematic trend toward a higher degree of synchrony.
arXiv Detail & Related papers (2023-02-24T13:21:58Z) - Bayesian Inference of Stochastic Dynamical Networks [0.0]
- Bayesian Inference of Stochastic Dynamical Networks [0.0]
This paper presents a novel method for learning network topology and internal dynamics.
Our method achieves state-of-the-art performance compared with group sparse Bayesian learning (GSBL), BINGO, kernel-based methods, dynGENIE3, GENIE3, and ARNI.
arXiv Detail & Related papers (2022-06-02T03:22:34Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target; the control signal can then be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Synergetic Learning of Heterogeneous Temporal Sequences for
Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder combines advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z) - Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces the structural properties of Archimedean copulas.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
arXiv Detail & Related papers (2020-12-05T22:58:37Z) - Self-Supervised Learning of Generative Spin-Glasses with Normalizing
Flows [0.0]
We develop continuous spin-glass distributions with normalizing flows to model correlations in generic discrete problems.
We demonstrate that key physical and computational properties of the spin-glass phase can be successfully learned.
Remarkably, we observe that the learning itself corresponds to a spin-glass phase transition within the layers of the trained normalizing flows.
arXiv Detail & Related papers (2020-01-02T19:00:01Z)