Forced Variational Integrator Networks for Prediction and Control of
Mechanical Systems
- URL: http://arxiv.org/abs/2106.02973v1
- Date: Sat, 5 Jun 2021 21:39:09 GMT
- Title: Forced Variational Integrator Networks for Prediction and Control of
Mechanical Systems
- Authors: Aaron Havens and Girish Chowdhary
- Abstract summary: We show that the forced variational integrator network (FVIN) architecture allows us to accurately account for energy dissipation and external forcing.
This can result in highly data-efficient model-based control and accurate prediction on real non-conservative systems.
- Score: 7.538482310185133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep learning becomes more prevalent for prediction and control of real
physical systems, it is important that these overparameterized models are
consistent with physically plausible dynamics. This raises the question of how
much inductive bias to impose on the model, through known physical parameters
and principles, in order to reduce the complexity of the learning problem and
obtain more reliable predictions. Recent work employs discrete variational integrators
parameterized as a neural network architecture to learn conservative Lagrangian
systems. The learned model captures and enforces global energy preserving
properties of the system from very few trajectories. However, most real systems
are inherently non-conservative and, in practice, we would also like to apply
actuation. In this paper we extend this paradigm to account for general forcing
(e.g. control input and damping) via the discrete d'Alembert principle, which may
ultimately be used for control applications. We show that this forced
variational integrator network (FVIN) architecture allows us to accurately
account for energy dissipation and external forcing while still capturing the
true underlying energy-based passive dynamics. We show that, in application, this
can result in highly data-efficient model-based control and accurate prediction
on real non-conservative systems.
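As a rough illustration (not the authors' implementation), the sketch below takes one step of a forced variational integrator in velocity-Verlet form: the potential gradient grad_U would be a learned network in an FVIN, and f_ext collects external forcing such as control input and viscous damping. The unit mass, the function names, and the exact discretization are assumptions made for illustration.

    # Minimal sketch (not the authors' code): one step of a forced variational
    # integrator in velocity-Verlet form. grad_U stands in for the learned
    # potential gradient; f_ext models external forcing such as control input
    # and damping. Names and discretization are illustrative assumptions.
    import numpy as np

    def fvin_step(q, v, u, h, grad_U, f_ext):
        """Advance (q, v) by one step of size h (unit mass assumed)."""
        a0 = -grad_U(q) + f_ext(q, v, u)            # acceleration at current state
        q_next = q + h * v + 0.5 * h**2 * a0        # Verlet-style position update
        a1 = -grad_U(q_next) + f_ext(q_next, v, u)  # acceleration at new position
        v_next = v + 0.5 * h * (a0 + a1)            # velocity update
        return q_next, v_next

    # Toy usage: damped, forced harmonic oscillator.
    grad_U = lambda q: q                   # U(q) = 0.5 * q**2
    f_ext = lambda q, v, u: u - 0.1 * v    # control input minus viscous damping
    q, v = np.array([1.0]), np.array([0.0])
    for _ in range(200):
        q, v = fvin_step(q, v, u=np.array([0.0]), h=0.05, grad_U=grad_U, f_ext=f_ext)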
Related papers
- Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling [41.82469276824927]
We present a framework that achieves high-precision modeling for a wide range of dynamical systems.
It helps preserve energy for conservative systems while serving as a strong inductive bias for non-conservative, reversible systems.
By integrating the TRS loss within neural ordinary differential equation models, the proposed model TREAT demonstrates superior performance on diverse physical systems.
arXiv Detail & Related papers (2024-10-08T21:04:01Z)
- TANGO: Time-Reversal Latent GraphODE for Multi-Agent Dynamical Systems [43.39754726042369]
We propose a simple-yet-effective self-supervised regularization term as a soft constraint that aligns the forward and backward trajectories predicted by a continuous graph neural network-based ordinary differential equation (GraphODE).
It effectively imposes time-reversal symmetry to enable more accurate model predictions across a wider range of dynamical systems under classical mechanics.
Experimental results on a variety of physical systems demonstrate the effectiveness of our proposed method.
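As a rough sketch of the time-reversal-symmetry regularization described in this and the previous entry (an assumption about the specifics, not either paper's code), the snippet below penalizes disagreement between a forward rollout and a rollout run from the final state with momenta flipped; dim_q, the Euler discretization, and the loss form are illustrative choices.

    # Hypothetical time-reversal-symmetry (TRS) regularizer: for a mechanical
    # state x = (q, p), roll the model forward, flip momenta at the end, roll
    # forward again, and penalize mismatch with the reversed forward rollout.
    import numpy as np

    def rollout(f, x0, h, n):
        xs = [x0]
        for _ in range(n):
            xs.append(xs[-1] + h * f(xs[-1]))   # explicit Euler for brevity
        return np.stack(xs)

    def trs_loss(f, x0, h, n, dim_q):
        fwd = rollout(f, x0, h, n)
        flipped_end = np.concatenate([fwd[-1][:dim_q], -fwd[-1][dim_q:]])
        bwd = rollout(f, flipped_end, h, n)
        bwd = np.concatenate([bwd[:, :dim_q], -bwd[:, dim_q:]], axis=1)
        return np.mean((fwd - bwd[::-1]) ** 2)   # small for a reversible model

    # Example: an ideal oscillator is reversible; the loss is small and
    # nonzero only because of the Euler discretization error.
    f = lambda x: np.array([x[1], -x[0]])
    print(trs_loss(f, np.array([1.0, 0.0]), h=0.01, n=100, dim_q=1))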
arXiv Detail & Related papers (2023-10-10T08:52:16Z)
- Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Unifying physical systems' inductive biases in neural ODE using dynamics
constraints [0.0]
We provide a simple method that could be applied to not just energy-conserving systems, but also dissipative systems.
The proposed method does not require changing the neural network architecture and could form the basis to validate a novel idea.
arXiv Detail & Related papers (2022-08-03T14:33:35Z)
- Gradient-Enhanced Physics-Informed Neural Networks for Power Systems
Operational Support [36.96271320953622]
This paper introduces a machine learning method to approximate the behavior of power systems dynamics in near real time.
The proposed framework is based on gradient-enhanced physics-informed neural networks (gPINNs) and encodes the underlying physical laws governing power systems.
arXiv Detail & Related papers (2022-06-21T17:56:55Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Structure-Preserving Learning Using Gaussian Processes and Variational
Integrators [62.31425348954686]
We propose the combination of a variational integrator for the nominal dynamics of a mechanical system and learning residual dynamics with Gaussian process regression.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
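As a rough sketch of the combination described in this entry (an assumption, not the paper's implementation), the snippet below pairs a simple variational (symplectic Euler) step for a nominal pendulum with a Gaussian-process model of the residual between observed and nominal transitions; the toy system, step size, and synthetic data are made up for illustration.

    # Hypothetical structure-preserving + GP-residual model: known nominal
    # mechanics are integrated variationally, and a GP learns what is left over.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def nominal_step(x, h=0.05):
        q, v = x[0], x[1]
        v_new = v - h * np.sin(q)                 # symplectic Euler: velocity first
        return np.array([q + h * v_new, v_new])   # then position

    # Synthetic transitions: the "true" system has extra damping the nominal model lacks.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    Y = np.array([nominal_step(x) - np.array([0.0, 0.05 * x[1]]) for x in X])
    residuals = Y - np.array([nominal_step(x) for x in X])
    gp = GaussianProcessRegressor(alpha=1e-6).fit(X, residuals)

    def predict(x):
        return nominal_step(x) + gp.predict(x[None])[0]   # nominal step plus learned residual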
arXiv Detail & Related papers (2021-12-10T11:09:29Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
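As a rough sketch of the kind of construction described in the last entry (an assumption about the specifics, not the authors' code), the snippet below makes a nominal learned vector field provably stable by removing any component that would increase a Lyapunov function V along trajectories.

    # Hypothetical stability projection: guarantee dV/dt <= -alpha * V along
    # trajectories by subtracting the violating component of the nominal model.
    import numpy as np

    def stable_dynamics(x, f_hat, V, grad_V, alpha=0.1):
        f, g = f_hat(x), grad_V(x)
        violation = max(0.0, float(g @ f) + alpha * V(x))   # how much V would grow
        return f - violation * g / (g @ g + 1e-9)           # remove that component

    # Toy usage: an unstable linear model is rendered stable w.r.t. V(x) = ||x||^2.
    f_hat = lambda x: 0.5 * x                 # nominal (unstable) dynamics
    V = lambda x: float(x @ x)
    grad_V = lambda x: 2.0 * x
    print(stable_dynamics(np.array([1.0, -2.0]), f_hat, V, grad_V))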
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.