Physics-enhanced Gaussian Process Variational Autoencoder
- URL: http://arxiv.org/abs/2305.09006v1
- Date: Mon, 15 May 2023 20:41:39 GMT
- Title: Physics-enhanced Gaussian Process Variational Autoencoder
- Authors: Thomas Beckers, Qirui Wu, George J. Pappas
- Abstract summary: Variational autoencoders allow learning a lower-dimensional latent space from high-dimensional input/output data.
We propose a physics-enhanced variational autoencoder that places a physics-enhanced Gaussian process prior on the latent dynamics.
The benefits of the proposed approach are highlighted in a simulation with an oscillating particle.
- Score: 21.222154875601984
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Variational autoencoders allow learning a lower-dimensional latent
space from high-dimensional input/output data. Using video clips as input data,
the encoder may be used to describe the movement of an object in the video
without ground-truth data (unsupervised learning). Even though the object's
dynamics are typically governed by first principles, this prior knowledge is
mostly ignored in the existing literature. Thus, we propose a physics-enhanced
variational autoencoder that places a physics-enhanced Gaussian process prior
on the latent dynamics to improve the efficiency of the variational autoencoder
and to allow physically correct predictions. The physical prior knowledge,
expressed as a linear dynamical system, is reflected in the Green's function
and incorporated into the kernel function of the Gaussian process. The benefits
of the proposed approach are highlighted in a simulation with an oscillating
particle.
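The kernel construction described above — pushing a base kernel through the Green's function of a linear dynamical system — can be illustrated numerically. The following is a minimal sketch, not the paper's implementation: it discretizes the convolution with the Green's function of a damped harmonic oscillator on a time grid, so that samples from the resulting GP prior oscillate like the underlying system. All parameter names and values (omega, zeta, the RBF length scale) are illustrative assumptions.

```python
import numpy as np

def greens_matrix(t, omega=2.0, zeta=0.1):
    """Discretized Green's function G(t, s) of x'' + 2*zeta*omega*x' + omega^2 x = u."""
    dt = t[1] - t[0]
    omega_d = omega * np.sqrt(1.0 - zeta**2)  # damped natural frequency
    T, S = np.meshgrid(t, t, indexing="ij")
    tau = T - S
    # Causal impulse response: zero for t < s
    G = np.where(tau >= 0,
                 np.exp(-zeta * omega * tau) * np.sin(omega_d * tau) / omega_d,
                 0.0)
    return G * dt  # dt approximates the convolution integral

def rbf(t, ell=0.5):
    """Squared-exponential base kernel on the forcing u(t)."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

t = np.linspace(0.0, 10.0, 200)
G = greens_matrix(t)
K_u = rbf(t)
K_x = G @ K_u @ G.T            # physics-informed covariance on the latent state
K_x += 1e-6 * np.eye(len(t))   # jitter for numerical stability

# Draws from N(0, K_x) oscillate like the underlying linear system
sample = np.linalg.cholesky(K_x) @ np.random.default_rng(0).standard_normal(len(t))
```

Because the covariance is built from the system's Green's function, every GP sample is (approximately) a trajectory the linear dynamics could produce, which is what allows physically consistent latent predictions.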
Related papers
- Interpretable Representation Learning from Videos using Nonlinear Priors [15.779730667509915]
We propose a deep learning framework where one can specify nonlinear priors for videos.
We do this by extending the Variational Auto-Encoder (VAE) prior from a simple isotropic Gaussian to an arbitrary nonlinear temporal Additive Noise Model (ANM).
We validate the method on different real-world physics videos including a pendulum, a mass on a spring, a falling object and a pulsar.
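A temporal Additive Noise Model prior of the kind described can be sketched in a few lines: each latent state depends nonlinearly on the previous one plus independent noise, rather than being drawn i.i.d. from an isotropic Gaussian. The pendulum-style transition function below is an illustrative assumption, not the cited paper's model.

```python
import numpy as np

def f(z, dt=0.1, g_over_l=9.81):
    """One Euler step of pendulum dynamics; z = (angle, angular velocity)."""
    theta, omega = z
    return np.array([theta + dt * omega,
                     omega - dt * g_over_l * np.sin(theta)])

def sample_anm_prior(T=50, noise_std=0.01, seed=0):
    """Sample a latent trajectory from the ANM prior z_t = f(z_{t-1}) + eps_t."""
    rng = np.random.default_rng(seed)
    z = np.zeros((T, 2))
    z[0] = [0.5, 0.0]  # initial angle, starting at rest
    for t in range(1, T):
        z[t] = f(z[t - 1]) + noise_std * rng.standard_normal(2)
    return z

traj = sample_anm_prior()
```

Replacing the isotropic Gaussian with such a prior makes the latent trajectories themselves carry the specified physics, which is what enables interpretable representations.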
arXiv Detail & Related papers (2024-10-24T08:39:24Z)
- Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics [48.99021224773799]
We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections.
We also propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images.
arXiv Detail & Related papers (2024-10-10T17:43:36Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video [58.043569985784806]
We introduce latent intuitive physics, a transfer learning framework for physics simulation.
It can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes.
We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation.
arXiv Detail & Related papers (2024-06-18T16:37:44Z)
- Neural Categorical Priors for Physics-Based Character Control [12.731392285646614]
We propose a new learning framework for controlling physics-based characters with significantly improved motion quality and diversity.
The proposed method uses reinforcement learning (RL) to initially track and imitate life-like movements from unstructured motion clips.
We conduct comprehensive experiments with humanoid characters on two challenging downstream tasks: sword-shield striking and a two-player boxing game.
arXiv Detail & Related papers (2023-08-14T15:10:29Z)
- On the Integration of Physics-Based Machine Learning with Hierarchical Bayesian Modeling Techniques [0.0]
This paper proposes to embed mechanics-based models into the mean function of a Gaussian Process (GP) model and characterize potential discrepancies through kernel machines.
The stationarity of the kernel function is a difficult hurdle in the sequential processing of long data sets, resolved through hierarchical Bayesian techniques.
Using numerical and experimental examples, potential applications of the proposed method to structural dynamics inverse problems are demonstrated.
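The idea of embedding a mechanics-based model into the GP mean function, with the kernel absorbing model discrepancy, can be sketched briefly. This is a hedged illustration under assumed choices (a linear spring law as the "mechanics" model, an RBF kernel, made-up hyperparameters), not the cited paper's formulation.

```python
import numpy as np

def physics_mean(x, k_spring=3.0):
    """Idealized mechanics model used as the GP mean: F = k * x."""
    return k_spring * x

def rbf(a, b, ell=0.3, sf=0.5):
    """Squared-exponential kernel modeling the discrepancy."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
x_train = np.linspace(-1.0, 1.0, 30)
# Observations: spring law plus a smooth unmodeled nonlinearity and noise
y_train = (physics_mean(x_train) + 0.3 * np.sin(4 * x_train)
           + 0.05 * rng.standard_normal(30))

sn2 = 0.05**2  # observation noise variance
K = rbf(x_train, x_train) + sn2 * np.eye(30)
# GP regression on the residual y - m(x), i.e. on the model discrepancy
alpha = np.linalg.solve(K, y_train - physics_mean(x_train))

x_test = np.linspace(-1.0, 1.0, 100)
# Posterior mean = physics model + learned discrepancy correction
mu = physics_mean(x_test) + rbf(x_test, x_train) @ alpha
```

The posterior reverts to the physics model wherever data is scarce, which is the practical appeal of putting mechanics in the mean rather than learning everything from scratch.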
arXiv Detail & Related papers (2023-03-01T02:29:41Z)
- The dynamics of representation learning in shallow, non-linear autoencoders [3.1219977244201056]
We study the dynamics of feature learning in non-linear, shallow autoencoders.
An analysis of the long-time dynamics explains the failure of sigmoidal autoencoders to learn with tied weights.
We show that our equations accurately describe the generalisation dynamics of non-linear autoencoders on realistic datasets.
arXiv Detail & Related papers (2022-01-06T15:57:31Z)
- Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
- Fast Modeling and Understanding Fluid Dynamics Systems with Encoder-Decoder Networks [0.0]
We show that an accurate deep-learning-based proxy model can be taught efficiently by a finite-volume-based simulator.
Compared to traditional simulation, the proposed deep learning approach enables much faster forward computation.
We quantify the sensitivity of the deep learning model to key physical parameters and hence demonstrate that the inversion problems can be solved with great acceleration.
arXiv Detail & Related papers (2020-06-09T17:14:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.