Uncovering Closed-form Governing Equations of Nonlinear Dynamics from
Videos
- URL: http://arxiv.org/abs/2106.04776v1
- Date: Wed, 9 Jun 2021 02:50:11 GMT
- Title: Uncovering Closed-form Governing Equations of Nonlinear Dynamics from
Videos
- Authors: Lele Luan, Yang Liu, Hao Sun
- Abstract summary: We introduce a novel end-to-end unsupervised deep learning framework to uncover the mathematical structure of equations that governs the dynamics of moving objects in videos.
Such an architecture consists of (1) an encoder-decoder network that learns low-dimensional spatial/pixel coordinates of the moving object, (2) a learnable Spatial-Physical Transformation component that creates mapping between the extracted spatial/pixel coordinates and the latent physical states of dynamics, and (3) a numerical integrator-based sparse regression module that uncovers the parsimonious closed-form governing equations of learned physical states.
- Score: 8.546520029145853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distilling analytical models from data has the potential to advance our
understanding and prediction of nonlinear dynamics. Although discovery of
governing equations based on observed system states (e.g., trajectory time
series) has revealed success in a wide range of nonlinear dynamics, uncovering
the closed-form equations directly from raw videos still remains an open
challenge. To this end, we introduce a novel end-to-end unsupervised deep
learning framework to uncover the mathematical structure of equations that
governs the dynamics of moving objects in videos. Such an architecture consists
of (1) an encoder-decoder network that learns low-dimensional spatial/pixel
coordinates of the moving object, (2) a learnable Spatial-Physical
Transformation component that creates mapping between the extracted
spatial/pixel coordinates and the latent physical states of dynamics, and (3) a
numerical integrator-based sparse regression module that uncovers the
parsimonious closed-form governing equations of learned physical states and,
meanwhile, serves as a constraint to the autoencoder. The efficacy of the
proposed method is demonstrated by uncovering the governing equations of a
variety of nonlinear dynamical systems depicted by moving objects in videos.
The resulting computational framework enables discovery of parsimonious, interpretable models in a flexible and accessible sensing environment where only videos are available.
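The sparse-regression idea in component (3) can be made concrete with a short, self-contained Python sketch. It is an illustration in the spirit of SINDy-style sequentially thresholded least squares over a polynomial candidate library, not the authors' implementation: the encoder-decoder, the Spatial-Physical Transformation, and the numerical-integrator coupling are omitted, and the "learned physical states" are replaced by synthetic states of a hypothetical damped oscillator.

    # Minimal sketch (assumptions: SINDy-style STLSQ; states come from a synthetic
    # damped oscillator rather than from the paper's video pipeline).
    import numpy as np

    def library(q, dq):
        """Candidate terms Theta(q, dq) = [1, q, dq, q^2, q*dq, dq^2]."""
        return np.column_stack([np.ones_like(q), q, dq, q**2, q * dq, dq**2])

    def stlsq(Theta, target, threshold=0.05, iters=10):
        """Sequentially thresholded least squares: promotes a sparse coefficient vector."""
        xi = np.linalg.lstsq(Theta, target, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(xi) < threshold
            xi[small] = 0.0
            if (~small).any():
                xi[~small] = np.linalg.lstsq(Theta[:, ~small], target, rcond=None)[0]
        return xi

    # Synthetic trajectory of ddq = -2*zeta*wn*dq - wn^2*q (damped oscillator).
    t = np.linspace(0.0, 10.0, 2000)
    wn, zeta = 2.0, 0.1
    wd = wn * np.sqrt(1.0 - zeta**2)
    q = np.exp(-zeta * wn * t) * np.cos(wd * t)   # one valid solution of the ODE
    dq = np.gradient(q, t)                        # finite-difference derivatives
    ddq = np.gradient(dq, t)

    xi = stlsq(library(q, dq), ddq)
    terms = ["1", "q", "dq", "q^2", "q*dq", "dq^2"]
    print("ddq ~ " + " + ".join(f"{c:.3f}*{s}" for c, s in zip(xi, terms) if c != 0.0))
    # Expect roughly ddq ~ -4.0*q - 0.4*dq, i.e. the governing equation is recovered.

In the paper's framework, this kind of sparse regression is applied to the latent physical states produced by the autoencoder and the Spatial-Physical Transformation, and a numerical integrator ties the identified equation back to the reconstruction loss; the sketch above shows only the regression step.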
Related papers
- Learning Physics From Video: Unsupervised Physical Parameter Estimation for Continuous Dynamical Systems [49.11170948406405]
State-of-the-art in automatic parameter estimation from video is addressed by training supervised deep networks on large datasets.
We propose a method to estimate the physical parameters of any known, continuous governing equation from single videos.
arXiv Detail & Related papers (2024-10-02T09:44:54Z)
- Discovering Governing equations from Graph-Structured Data by Sparse Identification of Nonlinear Dynamical Systems [0.27624021966289597]
We develop a new method called Sparse Identification of Dynamical Systems from Graph-structured data (SINDyG).
SINDyG incorporates the network structure into sparse regression to identify model parameters that explain the underlying network dynamics.
arXiv Detail & Related papers (2024-09-02T17:51:37Z)
- Dynamic Scene Understanding through Object-Centric Voxelization and Neural Rendering [57.895846642868904]
We present a 3D generative model named DynaVol-S for dynamic scenes that enables object-centric learning.
Object-centric voxelization infers per-object occupancy probabilities at individual spatial locations.
Our approach integrates 2D semantic features to create 3D semantic grids, representing the scene through multiple disentangled voxel grids.
arXiv Detail & Related papers (2024-07-30T15:33:58Z)
- Vision-based Discovery of Nonlinear Dynamics for 3D Moving Target [11.102585080028945]
We propose a vision-based approach to automatically uncover governing equations of nonlinear dynamics for 3D moving targets via raw videos recorded by a set of cameras.
This framework effectively handles challenges in the measurement data, e.g., noise in the video and imprecise tracking of the target that causes missing data.
arXiv Detail & Related papers (2024-04-27T11:13:55Z)
- DynaVol: Unsupervised Learning for Dynamic Scenes through Object-Centric Voxelization [67.85434518679382]
We present DynaVol, a 3D scene generative model that unifies geometric structures and object-centric learning.
The key idea is to perform object-centric voxelization to capture the 3D nature of the scene.
Voxel features evolve over time through a canonical-space deformation function, forming the basis for global representation learning.
arXiv Detail & Related papers (2023-04-30T05:29:28Z)
- NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos [82.74918564737591]
We present a method for learning 3D geometry and physics parameters of a dynamic scene from only a monocular RGB video input.
Experiments show that our method achieves superior mesh and video reconstruction of dynamic scenes compared to competing Neural Field approaches.
arXiv Detail & Related papers (2022-10-22T04:57:55Z)
- Distilling Governing Laws and Source Input for Dynamical Systems from Videos [13.084113582897965]
Distilling interpretable physical laws from videos has led to expanded interest in the computer vision community.
This paper introduces an end-to-end unsupervised deep learning framework to uncover the explicit governing equations of dynamics presented by moving object(s) based on recorded videos.
arXiv Detail & Related papers (2022-05-03T05:40:01Z)
- Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- Discovering Governing Equations from Partial Measurements with Deep Delay Autoencoders [4.446017969073817]
A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important.
Here, we design a custom deep autoencoder network to learn a coordinate transformation from the delay embedded space into a new space.
We demonstrate this approach on the Lorenz, Rössler, and Lotka-Volterra systems, learning dynamics from a single measurement variable (a minimal delay-embedding sketch appears after this list).
arXiv Detail & Related papers (2022-01-13T18:48:16Z)
- Physics-informed Spline Learning for Nonlinear Dynamics Discovery [8.546520029145853]
We propose a Physics-informed Spline Learning framework to discover parsimonious governing equations for nonlinear dynamics.
The framework operates on sparsely sampled, noisy data.
The efficacy and superiority of the proposed method have been demonstrated on multiple well-known nonlinear dynamical systems.
arXiv Detail & Related papers (2021-05-05T23:32:43Z)
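For the deep delay autoencoder entry above, the delay-embedding step it builds on can be sketched in a few lines. This is a generic illustration, not code from the cited paper: a single measured variable is stacked into a Hankel-style matrix of delayed copies, which is the higher-dimensional input an autoencoder would then map to latent coordinates. The signal below is a hypothetical two-tone stand-in for a real measurement such as a single Lorenz variable.

    # Minimal delay-embedding sketch (assumption: a synthetic single-channel signal).
    import numpy as np

    def delay_embed(x, n_delays, stride=1):
        """Stack delayed copies of x into a Hankel-style matrix of shape (n_rows, n_delays)."""
        n_rows = len(x) - (n_delays - 1) * stride
        return np.column_stack([x[i * stride : i * stride + n_rows] for i in range(n_delays)])

    t = np.linspace(0.0, 20.0, 4000)
    x = np.sin(t) + 0.5 * np.sin(3.1 * t)   # hypothetical scalar measurement

    H = delay_embed(x, n_delays=10)
    print(H.shape)  # (3991, 10): each row is a delay vector an encoder could consume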
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.