Harnessing omnipresent oscillator networks as computational resource
- URL: http://arxiv.org/abs/2502.04818v2
- Date: Fri, 21 Feb 2025 15:15:21 GMT
- Title: Harnessing omnipresent oscillator networks as computational resource
- Authors: Thomas Geert de Jong, Hirofumi Notsu, Kohei Nakajima
- Abstract summary: We present a universal framework for harnessing oscillator networks as a computational resource. We drive the Kuramoto model with a nonlinear target system; after the target system is replaced by a trained feedback loop, the network emulates the target system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nature is pervaded with oscillatory behavior. In networks of coupled oscillators, patterns can arise when the system synchronizes to an external input; such networks therefore provide processing and memory of input. We present a universal framework for harnessing oscillator networks as a computational resource. This reservoir computing framework is introduced via the ubiquitous model for phase locking, the Kuramoto model. We drive the Kuramoto model with a nonlinear target system; after the target system is substituted by a trained feedback loop, the network emulates the target system. Our results are two-fold. First, the trained network inherits performance properties of the Kuramoto model: all-to-all coupling can be evaluated in linear time in the number of nodes, and parameter regimes that yield synchronization are abundant. Second, the learning capabilities of the oscillator network can be explained using the Kuramoto model's order parameter. This work provides the foundation for utilizing nature's oscillator networks as a new class of information processing systems.
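The recipe in the abstract can be made concrete in a few lines. Below is a minimal, illustrative sketch (not the authors' code): the mean-field form of the order parameter gives the linear-time all-to-all coupling mentioned above, a ridge-regression readout is trained while the network is forced by a target signal, and the loop is then closed. The simple sinusoidal target, the sin(theta) readout features, and all parameter values are assumptions for illustration.

```python
# Minimal sketch of a Kuramoto-based reservoir computer: force the
# oscillators with a target signal, train a linear readout, then
# substitute the target with the trained feedback loop.
import numpy as np

rng = np.random.default_rng(0)
N, K, eps, dt = 200, 1.5, 0.5, 0.02
omega = rng.normal(0.0, 0.5, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # oscillator phases

def step(theta, drive):
    # Mean-field form of all-to-all coupling: O(N) instead of O(N^2).
    z = np.exp(1j * theta).mean()        # order parameter r * e^{i psi}
    coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
    return theta + dt * (omega + coupling + eps * drive)

# Target signal to emulate (stand-in for a nonlinear target system).
T = 20000
t = np.arange(T) * dt
u = np.sin(t) + 0.5 * np.sin(2.7 * t)

# Listen phase: record reservoir features while forced by the target.
X = np.empty((T, N))
for k in range(T):
    theta = step(theta, u[k])
    X[k] = np.sin(theta)

# Ridge-regress a readout that reproduces the drive one step ahead.
washout, lam = 1000, 1e-6
A, y = X[washout:-1], u[washout + 1:]
W = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ y)

# Feedback phase: substitute the target with the trained readout.
pred = []
for _ in range(2000):
    drive = np.sin(theta) @ W
    pred.append(drive)
    theta = step(theta, drive)
print("first closed-loop outputs:", np.round(pred[:5], 3))
```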
Related papers
- Dissipation-driven quantum generative adversarial networks [11.833077116494929]
We introduce a novel dissipation-driven quantum generative adversarial network (DQGAN) architecture specifically tailored for generating classical data.
The classical data is encoded into the input qubits of the input layer via strong tailored dissipation processes.
We extract both the generated data and the classification results by measuring the observables of the steady state of the output qubits.
arXiv Detail & Related papers (2024-08-28T07:41:58Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
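As an illustration of the low-rank idea (a hypothetical sketch, not a CfC implementation), the recurrent matrix of a plain RNN cell can be factored as U V^T so the parameter count scales with the rank rather than the squared hidden size:

```python
# Low-rank recurrent connectivity: parameterize the recurrent matrix as
# U @ V.T with rank r << hidden size (n*n parameters become 2*n*r).
import torch
import torch.nn as nn

class LowRankRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(hidden_size, rank) * 0.1)
        self.V = nn.Parameter(torch.randn(hidden_size, rank) * 0.1)
        self.W_in = nn.Linear(input_size, hidden_size)

    def forward(self, x, h):
        # The full n x n recurrent matrix is never materialized.
        return torch.tanh(self.W_in(x) + (h @ self.V) @ self.U.T)

cell = LowRankRNNCell(input_size=8, hidden_size=128, rank=4)
h = torch.zeros(1, 128)
for x in torch.randn(10, 1, 8):   # a 10-step dummy sequence
    h = cell(x, h)
print(h.shape)                    # torch.Size([1, 128])
```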
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
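A toy version of the regular-versus-chaotic task (with the paper's LKCNN swapped for an ordinary scikit-learn classifier, purely for illustration) can be set up from the logistic map:

```python
# Label logistic-map series as regular (r = 3.5, periodic) or chaotic
# (r = 3.9) from the raw time series alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def logistic_series(r, n=200, rng=None):
    x = rng.uniform(0.1, 0.9)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

rng = np.random.default_rng(1)
X = np.array([logistic_series(r, rng=rng)
              for r in [3.5] * 200 + [3.9] * 200])
y = np.array([0] * 200 + [1] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```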
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Learning Flow Functions from Data with Applications to Nonlinear Oscillators [0.0]
We show that learning the flow function is equivalent to learning the input-to-state map of a discrete-time dynamical system.
This motivates the use of an RNN together with encoder and decoder networks which map the state of the system to the hidden state of the RNN and back.
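A minimal sketch of that encoder/RNN/decoder layout (sizes and module choices are assumptions, not the paper's):

```python
# Encoder maps the system state to the RNN hidden state, the RNN consumes
# the input signal, and the decoder maps hidden states back to states.
import torch
import torch.nn as nn

class FlowFunctionModel(nn.Module):
    def __init__(self, state_dim=2, input_dim=1, hidden=64):
        super().__init__()
        self.encoder = nn.Linear(state_dim, hidden)
        self.rnn = nn.GRU(input_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, state_dim)

    def forward(self, x0, u_seq):
        h0 = torch.tanh(self.encoder(x0)).unsqueeze(0)  # (1, B, hidden)
        out, _ = self.rnn(u_seq, h0)                    # (B, T, hidden)
        return self.decoder(out)                        # predicted states

model = FlowFunctionModel()
x0 = torch.randn(4, 2)        # batch of initial states
u = torch.randn(4, 50, 1)     # batch of input signals, 50 steps
print(model(x0, u).shape)     # torch.Size([4, 50, 2])
```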
arXiv Detail & Related papers (2023-03-29T13:04:04Z)
- Parallel Hybrid Networks: an interplay between quantum and classical neural networks [0.0]
We introduce a new, interpretable class of hybrid quantum neural networks that pass the inputs of the dataset in parallel.
We demonstrate the approach on two synthetic datasets sampled from periodic distributions with added protrusions as noise.
arXiv Detail & Related papers (2023-03-06T15:45:28Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
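The bandwidth claim can be illustrated with a SIREN-style first layer, where the frequency scale w0 (an assumed, untrained stand-in for the paper's parametrization) controls how rapidly the features oscillate:

```python
# Features sin(w0 * W x + b): larger w0 admits higher frequencies,
# i.e. a wider effective kernel bandwidth.
import numpy as np

rng = np.random.default_rng(0)

def sinusoidal_features(x, n_hidden=256, w0=1.0):
    W = rng.normal(0.0, 1.0, (n_hidden, 1))
    b = rng.uniform(-np.pi, np.pi, n_hidden)
    return np.sin(w0 * (x @ W.T) + b)

x = np.linspace(-1, 1, 500)[:, None]
for w0 in [1.0, 10.0, 30.0]:
    feats = sinusoidal_features(x, w0=w0)
    # Rough proxy for bandwidth: mean absolute change between grid points.
    rough = np.abs(np.diff(feats, axis=0)).mean()
    print(f"w0={w0:5.1f}  feature roughness={rough:.4f}")
```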
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
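A generic spatio-temporal sketch of this kind of model (not the paper's architecture; shapes and the station count are invented): a small CNN encodes each storm-input frame, an LSTM integrates the frames over time, and a linear head emits surge predictions.

```python
import torch
import torch.nn as nn

class SurgeNet(nn.Module):
    def __init__(self, n_stations=10, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 8*4*4 = 128
        self.lstm = nn.LSTM(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_stations)

    def forward(self, frames):                       # (B, T, 1, H, W)
        B, T = frames.shape[:2]
        z = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                 # surge at stations

x = torch.randn(2, 12, 1, 32, 32)  # 2 storms, 12 hourly frames
print(SurgeNet()(x).shape)          # torch.Size([2, 10])
```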
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Discovering dynamical features of Hodgkin-Huxley-type model of physiological neuron using artificial neural network [0.0]
We consider a Hodgkin-Huxley-type system with two fast and one slow variable.
For two such systems, one of which is bistable, we create artificial neural networks that are able to reproduce their dynamics.
For the bistable model this means that a network trained on only one branch of the solutions recovers the other branch without seeing it during training.
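A toy analogue of this experiment (using the Hindmarsh-Rose equations as a stand-in slow-fast system with two fast and one slow variable, and a generic scikit-learn network rather than the authors' setup):

```python
# Integrate a slow-fast neuron model, then fit a network that maps
# state(t) -> state(t + dt) and roll it forward autonomously.
import numpy as np
from sklearn.neural_network import MLPRegressor

def hr_step(v, dt=0.01, I=3.0):
    x, y, z = v
    dx = y - x**3 + 3 * x**2 - z + I     # fast voltage-like variable
    dy = 1 - 5 * x**2 - y                # fast recovery variable
    dz = 0.006 * (4 * (x + 1.6) - z)     # slow adaptation variable
    return v + dt * np.array([dx, dy, dz])

traj = [np.array([-1.6, -10.0, 2.0])]
for _ in range(30000):
    traj.append(hr_step(traj[-1]))
traj = np.array(traj)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300,
                   random_state=0)
net.fit(traj[:-1], traj[1:])
# Roll the trained network forward from a state on the attractor.
v = traj[15000].copy()
for _ in range(5):
    v = net.predict(v[None])[0]
print("free-running prediction:", np.round(v, 3))
```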
arXiv Detail & Related papers (2022-03-26T19:04:19Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and transfer to new tasks in a sample-efficient manner.
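A heavily simplified sketch of learned routing (far from the actual Neural Interpreters architecture): a softmax router decides, end to end, how much each input uses each function module.

```python
import torch
import torch.nn as nn

class SoftRouter(nn.Module):
    def __init__(self, dim=32, n_funcs=4):
        super().__init__()
        self.funcs = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                          nn.Linear(dim, dim))
            for _ in range(n_funcs))
        self.router = nn.Linear(dim, n_funcs)

    def forward(self, x):                      # (B, dim)
        w = torch.softmax(self.router(x), -1)  # learned routing weights
        outs = torch.stack([f(x) for f in self.funcs], 1)  # (B, K, dim)
        return (w.unsqueeze(-1) * outs).sum(1)

x = torch.randn(5, 32)
print(SoftRouter()(x).shape)   # torch.Size([5, 32])
```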
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Parallel Machine Learning for Forecasting the Dynamics of Complex Networks [0.0]
We present a machine learning scheme for forecasting the dynamics of large complex networks.
We use a parallel architecture that mimics the topology of the network of interest.
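A sketch of the parallel decomposition (details assumed, not the paper's exact scheme): every node gets a small reservoir that sees only that node and its graph neighbors, so the local readouts can be trained independently and in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, T = 6, 5000
# Ring of diffusively coupled logistic maps as the "complex network".
x = rng.uniform(0.2, 0.8, n_nodes)
states = np.empty((T, n_nodes))
for t in range(T):
    f = 3.9 * x * (1 - x)
    x = 0.9 * f + 0.05 * (np.roll(f, 1) + np.roll(f, -1))
    states[t] = x

def make_reservoir(n_in, n_res=100):
    Win = rng.normal(0, 0.5, (n_res, n_in))
    A = rng.normal(0, 1, (n_res, n_res))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius 0.9
    return Win, A

readouts = []
for i in range(n_nodes):
    nbrs = [(i - 1) % n_nodes, i, (i + 1) % n_nodes]  # local neighborhood
    Win, A = make_reservoir(len(nbrs))
    r = np.zeros(100)
    R = np.empty((T - 1, 100))
    for t in range(T - 1):
        r = np.tanh(A @ r + Win @ states[t, nbrs])
        R[t] = r
    W = np.linalg.solve(R.T @ R + 1e-6 * np.eye(100), R.T @ states[1:, i])
    readouts.append(W)
print("trained", len(readouts), "independent node readouts")
```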
arXiv Detail & Related papers (2021-08-27T06:06:41Z)
- KuraNet: Systems of Coupled Oscillators that Learn to Synchronize [8.53236289790987]
"KuraNet" is a deep-learning system of coupled oscillators that can learn to synchronize across a distribution of disordered network conditions.
We show how KuraNet can be used to empirically explore the conditions of global synchrony in analytically impenetrable models with disordered natural frequencies, external field strengths, and interaction delays.
In all cases, we show how KuraNet can generalize to both new data and new network scales, making it easy to work with small systems and form hypotheses about the thermodynamic limit.
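A conceptual sketch in KuraNet's spirit (all details illustrative, not KuraNet's implementation): make the couplings learnable, unroll Euler steps of the Kuramoto dynamics in PyTorch, and descend on 1 - r, where r is the order parameter.

```python
import torch

torch.manual_seed(0)
N, steps, dt = 32, 40, 0.1
omega = torch.randn(N)                     # disordered natural frequencies
K = torch.zeros(N, N, requires_grad=True)  # learnable coupling matrix
opt = torch.optim.Adam([K], lr=0.05)

for epoch in range(200):
    theta = 2 * torch.pi * torch.rand(N)   # fresh random initial phases
    for _ in range(steps):
        diff = theta.unsqueeze(0) - theta.unsqueeze(1)  # theta_j - theta_i
        theta = theta + dt * (omega + (K * torch.sin(diff)).mean(dim=1))
    # Order parameter r in [0, 1]; r -> 1 means full synchrony.
    r = torch.sqrt(torch.cos(theta).mean() ** 2
                   + torch.sin(theta).mean() ** 2)
    loss = 1.0 - r
    opt.zero_grad()
    loss.backward()
    opt.step()
print("order parameter after training:", float(r))
```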
arXiv Detail & Related papers (2021-05-06T17:26:33Z)
- Unsupervised Learning for Asynchronous Resource Allocation in Ad-hoc Wireless Networks [122.42812336946756]
We design an unsupervised learning method based on Aggregation Graph Neural Networks (Agg-GNNs).
We capture the asynchrony by modeling the activation pattern as a characteristic of each node and train a policy-based resource allocation method.
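The aggregation step behind Agg-GNNs can be sketched as repeated applications of a graph shift operator to the node states; the tiny random graph and sizes below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 8, 4
S = (rng.random((n, n)) < 0.3).astype(float)  # adjacency as shift operator
np.fill_diagonal(S, 0)
x = rng.normal(size=n)                         # node states

feats = [x]
for _ in range(K - 1):
    feats.append(S @ feats[-1])                # k-hop aggregated signals
Z = np.stack(feats, axis=1)                    # (n nodes, K features)
print(Z.shape)                                 # (8, 4)
```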
arXiv Detail & Related papers (2020-11-05T03:38:36Z)
- Machine Learning Link Inference of Noisy Delay-coupled Networks with Opto-Electronic Experimental Tests [1.0766846340954257]
We devise a machine learning technique to solve the general problem of inferring network links that have time-delays.
We first train a type of machine learning system known as reservoir computing to mimic the dynamics of the unknown network.
We formulate and test a technique that uses the trained parameters of the reservoir system output layer to deduce an estimate of the unknown network structure.
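A rough, simplified sketch of that pipeline (no delays, no noise model; the final weight-based link score is a crude proxy rather than the paper's estimator): train a reservoir to one-step-predict every node of an unknown network, then combine the trained output weights with the fixed input weights as a link-evidence map.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_res, T = 5, 200, 4000
A_true = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A_true, 0)

x = rng.uniform(0.2, 0.8, n)                 # coupled logistic maps
X = np.empty((T, n))
for t in range(T):
    f = 3.9 * x * (1 - x)
    x = 0.8 * f + 0.2 * A_true @ f / np.maximum(A_true.sum(1), 1)
    X[t] = x

Win = rng.normal(0, 0.5, (n_res, n))
W = rng.normal(0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling

r, R = np.zeros(n_res), np.empty((T - 1, n_res))
for t in range(T - 1):                           # mimic the dynamics
    r = np.tanh(W @ r + Win @ X[t])
    R[t] = r
Wout = np.linalg.solve(R.T @ R + 1e-6 * np.eye(n_res), R.T @ X[1:])

score = np.abs(Wout.T) @ np.abs(Win)             # crude link-evidence map
print("true adjacency:\n", A_true.astype(int))
print("link scores:\n", np.round(score / score.max(), 2))
```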
arXiv Detail & Related papers (2020-10-29T00:24:13Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
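One way to picture on-the-fly internal replay (a simplification, not the paper's objective): synthesize pseudo-inputs from the model itself by gradient ascent on its class logits, then rehearse on them alongside new data.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

def recall(model, cls, steps=100, lr=0.1):
    x = torch.randn(1, 10, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = -model(x)[0, cls]     # push the logit of class `cls` up
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()

replay_x = torch.cat([recall(model, c) for c in range(3)])
replay_y = torch.arange(3)
print(replay_x.shape, replay_y)      # pseudo-batch for rehearsal
```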
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
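A bare-bones equilibrium-propagation sketch on a one-hidden-layer energy-based network (a digital toy, not the paper's analog resistive networks; hyperparameters are untuned): relax the state to a free equilibrium, relax again with the output weakly nudged toward the target, and update weights from the two equilibria.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nh, ny = 2, 8, 1
W1 = rng.normal(0, 0.5, (nx, nh))    # input -> hidden couplings
W2 = rng.normal(0, 0.5, (nh, ny))    # hidden -> output couplings
sig = lambda v: 1.0 / (1.0 + np.exp(-v))

def relax(x, target=None, beta=0.0, steps=100, eps=0.2):
    # Gradient descent of the state on the energy; nudged if beta > 0.
    h, y = np.zeros(nh), np.zeros(ny)
    for _ in range(steps):
        rh, ry = sig(h), sig(y)
        dh = -h + rh * (1 - rh) * (x @ W1 + W2 @ ry)
        dy = -y + ry * (1 - ry) * (W2.T @ rh)
        if target is not None:
            dy += beta * (target - y)
        h, y = h + eps * dh, y + eps * dy
    return h, y

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([0.0, 1.0, 1.0, 0.0])   # XOR targets
beta, lr = 0.5, 0.5
for epoch in range(2000):
    for x, t in zip(X, T):
        h0, y0 = relax(x)                        # free phase
        hb, yb = relax(x, target=t, beta=beta)   # weakly clamped phase
        # Contrastive Hebbian update from the two equilibria.
        W1 += lr / beta * np.outer(x, sig(hb) - sig(h0))
        W2 += lr / beta * (np.outer(sig(hb), sig(yb))
                           - np.outer(sig(h0), sig(y0)))
print([float(np.round(relax(x)[1][0], 2)) for x in X])
```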
arXiv Detail & Related papers (2020-06-02T23:38:35Z)