Physical deep learning based on optimal control of dynamical systems
- URL: http://arxiv.org/abs/2012.08761v2
- Date: Thu, 1 Apr 2021 06:43:47 GMT
- Title: Physical deep learning based on optimal control of dynamical systems
- Authors: Genki Furuhata, Tomoaki Niiyama, and Satoshi Sunada
- Abstract summary: In this study, we perform pattern recognition based on the optimal control of continuous-time dynamical systems.
As a key example, we apply the dynamics-based recognition approach to an optoelectronic delay system, which enables image recognition and nonlinear classification using only a few control signals.
This is in contrast to conventional multilayer neural networks, which require a large number of weight parameters to be trained.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning is the backbone of artificial intelligence technologies, and it
can be regarded as a kind of multilayer feedforward neural network. An essence
of deep learning is information propagation through layers. This suggests that
there is a connection between deep neural networks and dynamical systems in the
sense that information propagation is explicitly modeled by the time-evolution
of dynamical systems. In this study, we perform pattern recognition based on
the optimal control of continuous-time dynamical systems, which is suitable for
physical hardware implementation. The learning is based on the adjoint method
to optimally control dynamical systems, and the deep (virtual) network
structures based on the time evolution of the systems are used for processing
input information. As a key example, we apply the dynamics-based recognition
approach to an optoelectronic delay system and demonstrate that the use of the
delay system allows for image recognition and nonlinear classifications using
only a few control signals. This is in contrast to conventional multilayer
neural networks, which require a large number of weight parameters to be
trained. The proposed approach provides insight into the mechanisms of deep
network processing in the framework of an optimal control problem and presents
a pathway for realizing physical computing hardware.
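The sketch below illustrates the core idea in a minimal, self-contained form (it is not the authors' implementation): a continuous-time dynamical system dx/dt = f(x, u(t)) processes an input supplied as the initial state x(0), only a few piecewise-constant control values u_k are trained, and their gradients are obtained by integrating the adjoint equation da/dt = -(df/dx)^T a backward in time. The vector field, softmax readout, and toy data are illustrative assumptions, not the optoelectronic delay system used in the paper.

```python
# Minimal sketch (assumed, not the authors' code): pattern recognition by optimal
# control of a continuous-time dynamical system dx/dt = f(x, u(t)). Only a few
# piecewise-constant control values u_k are trained; their gradient is computed
# with the adjoint method. Vector field, readout, and data are toy choices.
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_segments = 1.0, 50, 4        # horizon, Euler steps ("virtual layers"), control segments
dt = T / n_steps
u = 0.1 * rng.normal(size=n_segments)      # trainable control signal (only 4 parameters)

def f(x, u_k):                             # controlled nonlinear vector field (assumed form)
    return np.tanh(x[::-1] + u_k)

def dfdx(x, u_k):                          # Jacobian of f with respect to the state x
    s = 1.0 - np.tanh(x[::-1] + u_k) ** 2
    return np.array([[0.0, s[0]], [s[1], 0.0]])

def dfdu(x, u_k):                          # derivative of f with respect to the control value
    return 1.0 - np.tanh(x[::-1] + u_k) ** 2

def forward(x0):                           # explicit-Euler time evolution; trajectory is stored
    xs = [x0]
    for i in range(n_steps):
        xs.append(xs[-1] + dt * f(xs[-1], u[i * n_segments // n_steps]))
    return xs

def adjoint_grad(xs, y):                   # backward integration of the adjoint (costate) equation
    p = np.exp(xs[-1]) / np.exp(xs[-1]).sum()   # softmax readout over the two state components
    a = p - np.eye(2)[y]                        # a(T) = dL/dx(T) for the cross-entropy loss
    g = np.zeros_like(u)
    for i in reversed(range(n_steps)):
        k = i * n_segments // n_steps
        g[k] += dt * a @ dfdu(xs[i], u[k])      # dL/du_k accumulates a^T df/du over segment k
        a = a + dt * dfdx(xs[i], u[k]).T @ a    # da/dt = -(df/dx)^T a, integrated backward
    return g

# Toy two-class data: the label is the sign of the first input coordinate.
X = rng.normal(size=(100, 2))
Y = (X[:, 0] > 0).astype(int)
for epoch in range(100):                   # plain gradient descent on the few control values
    grad = sum(adjoint_grad(forward(x0), y) for x0, y in zip(X, Y)) / len(X)
    u -= 0.5 * grad
acc = np.mean([np.argmax(forward(x0)[-1]) == y for x0, y in zip(X, Y)])
print(f"controls: {u}, training accuracy: {acc:.2f}")
```

In this toy setting, the "depth" of the virtual network is set by the number of integration steps, while the number of trainable parameters stays equal to the number of control segments, mirroring the contrast the abstract draws with weight-heavy multilayer networks.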
Related papers
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- Efficient PAC Learnability of Dynamical Systems Over Multilayer Networks [30.424671907681688]
We study the learnability of dynamical systems over multilayer networks, which are more realistic and challenging.
We present an efficient PAC learning algorithm with provable guarantees to show that the learner only requires a small number of training examples to infer an unknown system.
arXiv Detail & Related papers (2024-05-11T02:35:08Z)
- Systematic construction of continuous-time neural networks for linear dynamical systems [0.0]
We discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems.
We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as a solution of a first-order or second-order ordinary differential equation (ODE).
Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm to compute sparse architecture and network parameters directly from the given LTI system.
arXiv Detail & Related papers (2024-03-24T16:16:41Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Backpropagation-free Training of Deep Physical Neural Networks [0.0]
We propose a simple deep neural network architecture augmented by a biologically plausible learning algorithm, referred to as "model-free forward-forward training".
We show that our method outperforms state-of-the-art hardware-aware training methods by improving training speed, decreasing digital computations, and reducing power consumption in physical systems.
arXiv Detail & Related papers (2023-04-20T14:02:49Z)
- On the effectiveness of neural priors in modeling dynamical systems [28.69155113611877]
We discuss the architectural regularization that neural networks offer when learning such systems.
We show that simple coordinate networks with few layers can be used to solve multiple problems in modelling dynamical systems.
arXiv Detail & Related papers (2023-03-10T06:21:24Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Neural Networks with Physics-Informed Architectures and Constraints for Dynamical Systems Modeling [19.399031618628864]
We develop a framework to learn dynamics models from trajectory data.
We place constraints on the values of the outputs and the internal states of the model.
We experimentally demonstrate the benefits of the proposed approach on a variety of dynamical systems.
arXiv Detail & Related papers (2021-09-14T02:47:51Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns the basis functions of such a representation using supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target; the resulting control signal can then be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.