Neural Ordinary Differential Equation Control of Dynamics on Graphs
- URL: http://arxiv.org/abs/2006.09773v5
- Date: Fri, 15 Oct 2021 00:09:21 GMT
- Title: Neural Ordinary Differential Equation Control of Dynamics on Graphs
- Authors: Thomas Asikis, Lucas Böttcher and Nino Antulov-Fantulin
- Abstract summary: We study the ability of neural networks to calculate feedback control signals that steer trajectories of continuous time non-linear dynamical systems on graphs.
We present a neural-ODE control (NODEC) framework and find that it can learn feedback control signals that drive graph dynamical systems into desired target states.
- Score: 2.750124853532831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the ability of neural networks to calculate feedback control signals
that steer trajectories of continuous time non-linear dynamical systems on
graphs, which we represent with neural ordinary differential equations (neural
ODEs). To do so, we present a neural-ODE control (NODEC) framework and find
that it can learn feedback control signals that drive graph dynamical systems
into desired target states. While we use loss functions that do not constrain
the control energy, our results show, in accordance with related work, that
NODEC produces low energy control signals. Finally, we evaluate the performance
and versatility of NODEC against well-known feedback controllers and deep
reinforcement learning. We use NODEC to generate feedback controls for systems
of more than one thousand coupled, non-linear ODEs that represent epidemic
processes and coupled oscillators.
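The abstract describes the training mechanics only in words, so here is a minimal, hedged sketch of the NODEC idea. It assumes PyTorch, replaces the paper's epidemic and oscillator models with simple linear diffusion on a random graph, and uses an explicit-Euler rollout instead of an adaptive ODE solver; the names and network sizes are illustrative, not the authors' implementation. A feedback network u_theta(x, t) supplies control signals to a few driver nodes, and its parameters are trained by backpropagating a terminal-state loss through the rollout. As in the abstract, the loss contains no explicit control-energy term.

```python
# Minimal NODEC-style sketch (illustrative, not the authors' code):
# a neural network produces feedback controls for driver nodes of a graph
# dynamical system, trained by backpropagating the terminal-state loss
# through an explicit-Euler rollout of the controlled ODE.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_nodes, n_drivers, T, dt = 50, 5, 2.0, 0.01
steps = int(T / dt)

# Random symmetric adjacency and graph Laplacian (assumption: linear
# diffusion dynamics stand in for the paper's epidemic/oscillator models).
A = (torch.rand(n_nodes, n_nodes) < 0.1).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(0.0)
L = torch.diag(A.sum(1)) - A

# Driver matrix B: only the first n_drivers nodes receive control input.
B = torch.zeros(n_nodes, n_drivers)
B[:n_drivers] = torch.eye(n_drivers)

x0 = torch.zeros(n_nodes)          # initial state
x_target = torch.ones(n_nodes)     # desired target state

# Feedback controller u_theta(x, t): maps current state and time to controls.
controller = nn.Sequential(
    nn.Linear(n_nodes + 1, 64), nn.ELU(), nn.Linear(64, n_drivers)
)
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)

def rollout():
    """Integrate dx/dt = -L x + B u_theta(x, t) with explicit Euler."""
    x = x0
    for k in range(steps):
        t = torch.tensor([k * dt])
        u = controller(torch.cat([x, t]))
        x = x + dt * (-L @ x + B @ u)
    return x

for epoch in range(200):
    opt.zero_grad()
    loss = torch.mean((rollout() - x_target) ** 2)  # no explicit energy penalty
    loss.backward()
    opt.step()

with torch.no_grad():
    final_mse = torch.mean((rollout() - x_target) ** 2).item()
print(f"final MSE to target: {final_mse:.4f}")
```

In the paper, the controlled systems are epidemic and coupled-oscillator models with more than a thousand coupled non-linear ODEs, and NODEC is compared against well-known feedback controllers and deep reinforcement learning; the sketch above only illustrates the training loop.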
Related papers
- Neural Control: Concurrent System Identification and Control Learning with Neural ODE [13.727727205587804]
We propose a neural-ODE-based method for controlling unknown dynamical systems, denoted Neural Control (NC).
Our model concurrently learns the system dynamics as well as optimal controls that guide the system towards target states.
Our experiments demonstrate the effectiveness of our model for learning optimal control of unknown dynamical systems.
arXiv Detail & Related papers (2024-01-03T17:05:17Z) - From NeurODEs to AutoencODEs: a mean-field control framework for
width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning at equilibrium that applies to equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - Graph-Coupled Oscillator Networks [23.597444325599835]
Graph-Coupled Oscillator Networks (GraphCON) is a novel framework for deep learning on graphs.
We show that our framework offers competitive performance with respect to the state-of-the-art on a variety of graph-based learning tasks.
arXiv Detail & Related papers (2022-02-04T18:29:49Z) - Physics-informed Neural Networks-based Model Predictive Control for
Multi-link Manipulators [0.0]
We discuss nonlinear model predictive control (NMPC) for multi-body dynamics via physics-informed machine learning methods.
We present the idea of enhancing PINNs by adding control actions and initial conditions as additional network inputs (a toy sketch of this input-augmentation pattern appears after this list).
We present our results using our PINN-based MPC to solve a tracking problem for a complex mechanical system.
arXiv Detail & Related papers (2021-09-22T15:31:24Z) - Incorporating NODE with Pre-trained Neural Differential Operator for
Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions and learns the mapping from trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives when the complexity of the library is properly tuned.
arXiv Detail & Related papers (2021-06-08T08:04:47Z) - Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z) - Implicit energy regularization of neural ordinary-differential-equation
control [3.5880535198436156]
We present a versatile neural ordinary-differential-equation control (NODEC) framework with implicit energy regularization.
We show that NODEC can steer dynamical systems towards a desired target state within a predefined amount of time.
arXiv Detail & Related papers (2021-03-11T08:28:15Z) - Controlling nonlinear dynamical systems into arbitrary states using
machine learning [77.34726150561087]
We propose a novel and fully data-driven control scheme that relies on machine learning (ML).
Exploiting recently developed ML-based prediction capabilities of complex systems, we demonstrate that nonlinear systems can be forced to stay in arbitrary dynamical target states starting from any initial state.
Since this highly flexible control scheme places few demands on the amount of required data, we briefly discuss possible applications that range from engineering to medicine.
arXiv Detail & Related papers (2021-02-23T16:58:26Z) - DyNODE: Neural Ordinary Differential Equations for Dynamics Modeling in
Continuous Control [0.0]
We present a novel approach that captures the underlying dynamics of a system by incorporating control in a neural ordinary differential equation framework.
Results indicate that a simple DyNODE architecture, when combined with an actor-critic reinforcement learning algorithm, outperforms canonical neural networks.
arXiv Detail & Related papers (2020-09-09T12:56:58Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
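The PINN-based MPC entry above mentions adding control actions and initial conditions as additional network inputs. The toy sketch below (our own construction, not that paper's implementation) shows only the input-augmentation pattern: a surrogate network maps the current state and a candidate control to a predicted next state, and a crude sampling-based MPC step picks the control whose prediction minimizes a quadratic cost. Training the surrogate, e.g. with a physics-informed residual loss, is omitted here.

```python
# Toy sketch of control-as-input surrogate modeling for MPC (illustrative
# assumption; the surrogate below is untrained, so its predictions are
# meaningless until it is fitted, e.g. with a physics-informed loss).
import torch
import torch.nn as nn

state_dim, control_dim = 4, 1

# Surrogate dynamics model: (state, control) -> predicted next state.
surrogate = nn.Sequential(
    nn.Linear(state_dim + control_dim, 64), nn.Tanh(), nn.Linear(64, state_dim)
)

def mpc_step(x, n_candidates=256):
    """Pick the candidate control whose predicted next state is closest to the origin."""
    u = torch.linspace(-1.0, 1.0, n_candidates).unsqueeze(1)   # candidate controls
    x_rep = x.unsqueeze(0).expand(n_candidates, state_dim)     # repeat current state
    x_next = surrogate(torch.cat([x_rep, u], dim=1))           # predicted next states
    cost = (x_next ** 2).sum(dim=1)                            # quadratic state cost
    return u[cost.argmin()]

x = torch.randn(state_dim)
print("chosen control:", mpc_step(x).item())
```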