LagNetViP: A Lagrangian Neural Network for Video Prediction
- URL: http://arxiv.org/abs/2010.12932v1
- Date: Sat, 24 Oct 2020 16:50:14 GMT
- Title: LagNetViP: A Lagrangian Neural Network for Video Prediction
- Authors: Christine Allen-Blanchette, Sushant Veer, Anirudha Majumdar, Naomi Ehrich Leonard
- Abstract summary: We introduce a video prediction model where the equations of motion are explicitly constructed from learned representations of the underlying physical quantities.
We demonstrate the efficacy of this approach for video prediction on image sequences rendered in modified OpenAI gym Pendulum-v0 and Acrobot environments.
- Score: 12.645753197663584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dominant paradigms for video prediction rely on opaque transition models
where neither the equations of motion nor the underlying physical quantities of
the system are easily inferred. The equations of motion, as defined by Newton's
second law, describe the time evolution of a physical system state and can
therefore be applied toward the determination of future system states. In this
paper, we introduce a video prediction model where the equations of motion are
explicitly constructed from learned representations of the underlying physical
quantities. To achieve this, we simultaneously learn a low-dimensional state
representation and system Lagrangian. The kinetic and potential energy terms of
the Lagrangian are distinctly modelled and the low-dimensional equations of
motion are explicitly constructed using the Euler-Lagrange equations. We
demonstrate the efficacy of this approach for video prediction on image
sequences rendered in modified OpenAI gym Pendulum-v0 and Acrobot environments.
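Concretely, with a learned mass matrix M(q) and potential V(q), the Lagrangian is L(q, q_dot) = 0.5 * q_dot^T M(q) q_dot - V(q), and the Euler-Lagrange equations d/dt(dL/dq_dot) - dL/dq = 0 are solved for the accelerations. Below is a minimal sketch of that step via automatic differentiation; the module names mass_net and potential_net are placeholders for the learned networks, not the authors' released code.

```python
import jax
import jax.numpy as jnp

def lagrangian(q, q_dot, mass_net, potential_net):
    """L = T - V, with the kinetic and potential terms modelled distinctly."""
    M = mass_net(q)                 # learned (d, d) positive-definite mass matrix
    T = 0.5 * q_dot @ M @ q_dot     # kinetic energy
    V = potential_net(q)            # learned scalar potential energy
    return T - V

def acceleration(q, q_dot, mass_net, potential_net):
    """Solve the Euler-Lagrange equations for q_ddot:
    (d2L/dq_dot^2) q_ddot = dL/dq - (d2L/(dq_dot dq)) q_dot."""
    L = lambda q_, qd_: lagrangian(q_, qd_, mass_net, potential_net)
    dL_dq = jax.grad(L, argnums=0)(q, q_dot)
    H = jax.hessian(L, argnums=1)(q, q_dot)                      # equals M(q) here
    J = jax.jacfwd(jax.grad(L, argnums=1), argnums=0)(q, q_dot)  # mixed second derivatives
    return jnp.linalg.solve(H, dL_dq - J @ q_dot)
```

Integrating these accelerations forward in the learned low-dimensional state space and decoding the states back to images yields the predicted frames.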
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
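Maximum likelihood training with an energy-based prior, as in the paper above, is commonly implemented with short-run Langevin dynamics for the MCMC step; the sketch below shows a generic Langevin sampler under that assumption, not that paper's code.

```python
import jax
import jax.numpy as jnp

def langevin_sample(energy_fn, z_init, key, n_steps=20, step_size=0.1):
    """Short-run Langevin dynamics targeting the density exp(-energy_fn(z))."""
    z = z_init
    grad_energy = jax.grad(energy_fn)
    for _ in range(n_steps):
        key, subkey = jax.random.split(key)
        noise = jax.random.normal(subkey, z.shape)
        # Langevin update: gradient descent on the energy plus injected noise
        z = z - 0.5 * step_size**2 * grad_energy(z) + step_size * noise
    return z
```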
- GaussianPrediction: Dynamic 3D Gaussian Prediction for Motion Extrapolation and Free View Synthesis [71.24791230358065]
We introduce a novel framework that empowers 3D Gaussian representations with dynamic scene modeling and future scenario synthesis.
GaussianPrediction can forecast future states from any viewpoint, using video observations of dynamic scenes.
Our framework shows outstanding performance on both synthetic and real-world datasets, demonstrating its efficacy in predicting and rendering future environments.
arXiv Detail & Related papers (2024-05-30T06:47:55Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly captures both the governing PDE and the material model.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard constitutive priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Learning Vortex Dynamics for Fluid Inference and Prediction [25.969713036393895]
We propose a novel machine learning method based on differentiable vortex particles to infer and predict fluid dynamics from a single video.
We devise a novel differentiable vortex particle system in conjunction with a learnable vortex-to-velocity dynamics mapping to effectively capture and represent complex flow features in a reduced space.
arXiv Detail & Related papers (2023-01-27T02:10:05Z)
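For orientation, the classical fixed-form counterpart of the learnable vortex-to-velocity mapping above is the 2D Biot-Savart summation over particles; a mollified sketch follows (an illustration of the classical map, not the learned one from the paper).

```python
import jax.numpy as jnp

def biot_savart_2d(query, positions, strengths, eps=1e-4):
    """Velocity induced at `query` points by 2D vortex particles:
    u(x) = sum_j Gamma_j * K(x - x_j), with K(r) = (-r_y, r_x) / (2*pi*|r|^2)."""
    r = query[:, None, :] - positions[None, :, :]          # (Q, P, 2) offsets
    r2 = jnp.sum(r**2, axis=-1) + eps                      # mollified squared distance
    kernel = jnp.stack([-r[..., 1], r[..., 0]], axis=-1) / (2.0 * jnp.pi * r2[..., None])
    return jnp.sum(strengths[None, :, None] * kernel, axis=1)  # (Q, 2) velocities
```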
- Discrete Lagrangian Neural Networks with Automatic Symmetry Discovery [3.06483729892265]
We introduce a framework to learn a discrete Lagrangian along with its symmetry group from discrete observations of motions.
The learning process does not restrict the form of the Lagrangian, does not require velocity or momentum observations or predictions and incorporates a cost term.
arXiv Detail & Related papers (2022-11-20T00:46:33Z)
- Distilling Governing Laws and Source Input for Dynamical Systems from Videos [13.084113582897965]
Distilling interpretable physical laws from videos has attracted growing interest in the computer vision community.
This paper introduces an end-to-end unsupervised deep learning framework to uncover the explicit governing equations of dynamics presented by moving object(s) based on recorded videos.
arXiv Detail & Related papers (2022-05-03T05:40:01Z)
- Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video; (ii) the use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z)
- NeuroFluid: Fluid Dynamics Grounding with Particle-Driven Neural Radiance Fields [65.07940731309856]
Deep learning has shown great potential for modeling the physical dynamics of complex particle systems such as fluids.
In this paper, we consider a partially observable scenario known as fluid dynamics grounding.
We propose a differentiable two-stage network named NeuroFluid.
It is shown to reasonably estimate the underlying physics of fluids with different initial shapes, viscosities, and densities.
arXiv Detail & Related papers (2022-03-03T15:13:29Z)
- Physics Informed RNN-DCT Networks for Time-Dependent Partial Differential Equations [62.81701992551728]
We present a physics-informed framework for solving time-dependent partial differential equations.
Our model utilizes discrete cosine transforms to encode spatial frequencies and recurrent neural networks to model the time evolution.
We show experimental results on the Taylor-Green vortex solution to the Navier-Stokes equations.
arXiv Detail & Related papers (2022-02-24T20:46:52Z)
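A minimal sketch of the DCT half of the RNN-DCT pipeline above: compress each frame to its low-frequency cosine coefficients, producing a compact sequence for a recurrent network to evolve (generic illustration; the frame shapes and mode counts are assumptions).

```python
import jax.numpy as jnp
from jax.scipy.fft import dct

def dct2(field):
    """2D type-II DCT: transform rows, then columns, of a spatial field."""
    return dct(dct(field, axis=-1), axis=-2)

def encode_sequence(frames, n_modes=16):
    """Keep the lowest n_modes x n_modes DCT coefficients of each (H, W) frame,
    yielding a (T, n_modes**2) spectral sequence for an RNN to process."""
    coeffs = jnp.stack([dct2(f)[:n_modes, :n_modes] for f in frames])
    return coeffs.reshape(len(frames), -1)
```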
- Uncovering Closed-form Governing Equations of Nonlinear Dynamics from Videos [8.546520029145853]
We introduce a novel end-to-end unsupervised deep learning framework to uncover the mathematical structure of equations that governs the dynamics of moving objects in videos.
Such an architecture consists of (1) an encoder-decoder network that learns low-dimensional spatial/pixel coordinates of the moving object, (2) a learnable Spatial-Physical Transformation component that creates mapping between the extracted spatial/pixel coordinates and the latent physical states of dynamics, and (3) a numerical integrator-based sparse regression module that uncovers the parsimonious closed-form governing equations of learned physical states.
arXiv Detail & Related papers (2021-06-09T02:50:11Z)
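The sparse regression module above is in the spirit of SINDy-style system identification: regress estimated state derivatives onto a library of candidate terms and prune small coefficients. A generic sequentially thresholded least-squares sketch, not that paper's exact procedure:

```python
import jax.numpy as jnp

def stlsq(theta, x_dot, threshold=0.1, n_iter=10):
    """Fit x_dot ~ theta @ xi with sequentially thresholded least squares.
    theta: (m, p) library of candidate terms; x_dot: (m, d) derivatives."""
    xi = jnp.linalg.lstsq(theta, x_dot)[0]                     # (p, d) dense initial fit
    for _ in range(n_iter):
        mask = (jnp.abs(xi) >= threshold).astype(theta.dtype)  # surviving terms
        xi = jnp.zeros_like(xi)
        for k in range(x_dot.shape[1]):
            theta_k = theta * mask[:, k]                       # zero out pruned columns
            coef = jnp.linalg.lstsq(theta_k, x_dot[:, k])[0]
            xi = xi.at[:, k].set(coef * mask[:, k])            # refit, keep sparsity
    return xi
```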
- Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control [12.691047660244335]
We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images.
The model infers Lagrangian dynamics on generalized coordinates that are simultaneously learned with a coordinate-aware variational autoencoder.
arXiv Detail & Related papers (2020-07-03T20:06:43Z)