Physics-informed active learning with simultaneous weak-form latent space dynamics identification
- URL: http://arxiv.org/abs/2407.00337v2
- Date: Sat, 20 Jul 2024 19:21:42 GMT
- Title: Physics-informed active learning with simultaneous weak-form latent space dynamics identification
- Authors: Xiaolong He, April Tran, David M. Bortz, Youngsoo Choi,
- Abstract summary: We incorporate the weak-form estimation of nonlinear dynamics (WENDy) into gLaSDI.
An autoencoder and WENDy are trained simultaneously to discover latent-space dynamics of high-dimensional data.
We show that, with data containing 5-10% Gaussian white noise, WgLaSDI outperforms gLaSDI by orders of magnitude, achieving 1-7% relative errors.
- Score: 0.2999888908665658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The parametric greedy latent space dynamics identification (gLaSDI) framework has demonstrated promising potential for accurate and efficient modeling of high-dimensional nonlinear physical systems. However, it remains challenging to handle noisy data. To enhance robustness against noise, we incorporate the weak-form estimation of nonlinear dynamics (WENDy) into gLaSDI. In the proposed weak-form gLaSDI (WgLaSDI) framework, an autoencoder and WENDy are trained simultaneously to discover intrinsic nonlinear latent-space dynamics of high-dimensional data. Compared to the standard sparse identification of nonlinear dynamics (SINDy) employed in gLaSDI, WENDy enables variance reduction and robust latent space discovery, therefore leading to more accurate and efficient reduced-order modeling. Furthermore, the greedy physics-informed active learning in WgLaSDI enables adaptive sampling of optimal training data on the fly for enhanced modeling accuracy. The effectiveness of the proposed framework is demonstrated by modeling various nonlinear dynamical problems, including viscous and inviscid Burgers' equations, time-dependent radial advection, and the Vlasov equation for plasma physics. With data that contains 5-10% Gaussian white noise, WgLaSDI outperforms gLaSDI by orders of magnitude, achieving 1-7% relative errors. Compared with the high-fidelity models, WgLaSDI achieves 121 to 1,779x speed-up.
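A rough illustration of the core mechanism, simultaneous training of an autoencoder with a weak-form fit of the latent dynamics, is sketched below. This is an illustrative toy under stated assumptions (the network sizes, the polynomial test function, the linear latent model, and the synthetic data are all assumptions, not the authors' implementation):

```python
# Toy sketch of simultaneous autoencoder + weak-form latent dynamics training,
# in the spirit of WgLaSDI. Shapes, networks, and the linear latent model are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_full=200, n_latent=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_full, 64), nn.Tanh(), nn.Linear(64, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(), nn.Linear(64, n_full))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def weak_form_residual(z, t, coeffs):
    """Weak-form residual for a linear latent model dz/dt = z @ coeffs.
    The time derivative is moved onto a smooth, compactly supported test
    function by integration by parts, so the noisy latent trajectory z is
    never differentiated numerically."""
    T = t[-1]
    phi = (t * (T - t)) ** 2                     # test function, zero at both endpoints
    dphi = 2 * t * (T - t) * (T - 2 * t)         # its exact time derivative
    dt = t[1] - t[0]
    # integral of phi * dz/dt equals minus the integral of dphi * z (boundary terms vanish)
    lhs = -(dphi[:, None] * z).sum(dim=0) * dt
    rhs = (phi[:, None] * (z @ coeffs)).sum(dim=0) * dt
    return ((lhs - rhs) ** 2).sum()

torch.manual_seed(0)
t = torch.linspace(0.0, 1.0, 101)
# Stand-in for noisy high-dimensional snapshots (time x state); real data would come from a solver.
x_traj = torch.sin(2 * torch.pi * t)[:, None].repeat(1, 200) + 0.05 * torch.randn(101, 200)

model = Autoencoder()
coeffs = torch.zeros(3, 3, requires_grad=True)   # latent dynamics operator, learned jointly
opt = torch.optim.Adam(list(model.parameters()) + [coeffs], lr=1e-3)

for step in range(500):
    opt.zero_grad()
    z, x_rec = model(x_traj)
    loss = nn.functional.mse_loss(x_rec, x_traj) + 0.1 * weak_form_residual(z, t, coeffs)
    loss.backward()
    opt.step()
```

The single optimizer over both the autoencoder weights and the latent operator mirrors the "trained simultaneously" aspect of the framework; a sparsity-promoting penalty or a richer latent model could be substituted for the plain linear operator used here.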
Related papers
- Parametric Taylor series based latent dynamics identification neural networks [0.3139093405260182]
A new latent identification of nonlinear dynamics, P-TLDINets, is introduced.
It relies on a novel neural network structure based on Taylor series expansion and ResNets.
arXiv Detail & Related papers (2024-10-05T15:10:32Z)
- A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling [0.20742830443146304]
We focus on a framework known as Latent Space Dynamics Identification (LaSDI), which transforms high-fidelity data, governed by a PDE, into simpler, low-dimensional data governed by ordinary differential equations (ODEs).
Each building block of LaSDI can be easily modulated depending on the application, which makes the LaSDI framework highly flexible.
We demonstrate the performance of different LaSDI approaches on Burgers' equation, a nonlinear heat conduction problem, and a plasma physics problem, showing that LaSDI algorithms can achieve relative errors of a few percent or less and up to thousands of times speed-up.
arXiv Detail & Related papers (2024-03-16T00:45:06Z)
- Weak-Form Latent Space Dynamics Identification [0.2999888908665658]
Recent work in data-driven modeling has demonstrated that a weak formulation of model equations enhances the noise robustness of computational methods.
We demonstrate the power of the weak form to enhance the LaSDI (Latent Space Dynamics Identification) algorithm, a recently developed data-driven reduced-order modeling technique.
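In symbols, a sketch of the weak-form idea these papers describe, for a generic latent ODE $\dot z = f(z)$ and a test function $\phi$ that vanishes at the ends of the time window:

```latex
\int_{t_0}^{t_1} \phi(t)\,\dot z(t)\,dt
  = \bigl[\phi\,z\bigr]_{t_0}^{t_1} - \int_{t_0}^{t_1} \dot\phi(t)\,z(t)\,dt
  = -\int_{t_0}^{t_1} \dot\phi(t)\,z(t)\,dt,
\qquad\Rightarrow\qquad
-\int_{t_0}^{t_1} \dot\phi(t)\,z(t)\,dt \;\approx\; \int_{t_0}^{t_1} \phi(t)\,f\bigl(z(t)\bigr)\,dt .
```

Because only $z$ itself, never a numerical derivative of the noisy data, appears against the measurements, the resulting least-squares problem for $f$ is far less sensitive to noise.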
arXiv Detail & Related papers (2023-11-20T18:42:14Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which can then be applied in real time to multi-dimensional scattering data.
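A minimal sketch of that "train once, then recover parameters by autodiff" pattern follows. Everything here (the toy model, network sizes, names) is a hypothetical illustration, not the paper's code:

```python
# Illustrative sketch: train a surrogate once to mimic a known parametric model,
# then recover an unknown parameter from noisy "experimental" data by gradient
# descent through the frozen surrogate.
import torch
import torch.nn as nn

torch.manual_seed(0)

def toy_model(x, theta):
    """Stand-in for an expensive simulator: response as a function of x and parameter theta."""
    return torch.sin(theta * x) * torch.exp(-0.5 * x)

# 1) Train the surrogate on simulated (x, theta) -> y pairs. Done once, offline.
surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(256, 1) * 4.0
    theta = torch.rand(256, 1) * 3.0
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(torch.cat([x, theta], dim=1)), toy_model(x, theta))
    loss.backward()
    opt.step()
for p in surrogate.parameters():
    p.requires_grad_(False)                      # freeze the surrogate after training

# 2) Noisy observations generated with an unknown parameter.
theta_true = 2.2
x_obs = torch.linspace(0.0, 4.0, 100).unsqueeze(1)
y_obs = toy_model(x_obs, torch.tensor(theta_true)) + 0.01 * torch.randn_like(x_obs)

# 3) Recover the parameter by differentiating the data misfit through the surrogate.
theta_hat = torch.tensor([1.0], requires_grad=True)
fit = torch.optim.Adam([theta_hat], lr=5e-2)
for step in range(300):
    fit.zero_grad()
    pred = surrogate(torch.cat([x_obs, theta_hat.expand_as(x_obs)], dim=1))
    nn.functional.mse_loss(pred, y_obs).backward()
    fit.step()
print(f"recovered theta ~ {theta_hat.item():.2f} (true {theta_true})")
```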
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Certified data-driven physics-informed greedy auto-encoder simulator [0.5249805590164902]
An auto-encoder and dynamics identification models are trained interactively to discover intrinsic and simple latent-space dynamics.
An adaptive greedy sampling algorithm integrated with a physics-informed error indicator is introduced to search for optimal training samples on the fly.
Numerical results demonstrate that the proposed method achieves 121 to 2,658x speed-up with 1 to 5% relative errors for parametric radial advection and 2D Burgers dynamical problems.
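The greedy, indicator-driven sampling loop shared by these frameworks can be sketched as below. All names and the placeholder indicator are hypothetical; a real implementation would score candidates with a physics-informed residual of the trained model:

```python
# Illustrative sketch of greedy, indicator-driven sampling (hypothetical names).
# The reduced-order model is retrained as the worst-scoring parameter is
# repeatedly added to the training set.
import numpy as np

def error_indicator(rom, mu):
    """Placeholder for a physics-informed error proxy at parameter mu (e.g. a PDE
    residual norm); here it simply scores parameters far from the training set highest."""
    return min(abs(mu - m) for m in rom["train_params"]) if rom["train_params"] else np.inf

def train_rom(train_params):
    """Placeholder for (re)training the reduced-order model on the selected parameters."""
    return {"train_params": list(train_params)}

candidates = np.linspace(0.5, 1.5, 21)          # candidate parameter values mu
train_params = [candidates[0], candidates[-1]]  # start from the extremes of the range
rom = train_rom(train_params)

for it in range(5):                             # greedy loop: add the worst-resolved parameter
    scores = [error_indicator(rom, mu) for mu in candidates]
    if max(scores) < 1e-2:                      # stop once the indicator is small everywhere
        break
    train_params.append(candidates[int(np.argmax(scores))])
    rom = train_rom(train_params)

print("greedily selected parameters:", sorted(train_params))
```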
arXiv Detail & Related papers (2022-11-24T16:22:51Z)
- gLaSDI: Parametric Physics-informed Greedy Latent Space Dynamics Identification [0.5249805590164902]
A physics-informed greedy Latent Space Dynamics Identification (gLaSDI) method is proposed for accurate, efficient, and robust data-driven reduced-order modeling.
An interactive training algorithm is adopted for the autoencoder and local DI models, which enables identification of simple latent-space dynamics.
The effectiveness of the proposed framework is demonstrated by modeling various nonlinear dynamical problems.
arXiv Detail & Related papers (2022-04-26T00:15:46Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors derived from the theory of regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2D Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation [135.10594078615952]
We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects.
A benchmark contains over 17,000 action trajectories with six types of plush toys and 78 variants.
Our model achieves the best performance in geometry, correspondence, and dynamics predictions.
arXiv Detail & Related papers (2022-03-14T04:56:55Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)