Weak-Form Latent Space Dynamics Identification
- URL: http://arxiv.org/abs/2311.12880v1
- Date: Mon, 20 Nov 2023 18:42:14 GMT
- Title: Weak-Form Latent Space Dynamics Identification
- Authors: April Tran, Xiaolong He, Daniel A. Messenger, Youngsoo Choi, David M. Bortz
- Abstract summary: Recent work in data-driven modeling has demonstrated that a weak formulation of model equations enhances the noise robustness of computational methods.
We demonstrate the power of the weak form to enhance the LaSDI (Latent Space Dynamics Identification) algorithm, a recently developed data-driven reduced order modeling technique.
- Score: 0.2999888908665658
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent work in data-driven modeling has demonstrated that a weak formulation
of model equations enhances the noise robustness of a wide range of
computational methods. In this paper, we demonstrate the power of the weak form
to enhance the LaSDI (Latent Space Dynamics Identification) algorithm, a
recently developed data-driven reduced order modeling technique.
We introduce a weak form-based version WLaSDI (Weak-form Latent Space
Dynamics Identification). WLaSDI first compresses data, then projects onto the
test functions and learns the local latent space models. Notably, WLaSDI
demonstrates significantly enhanced robustness to noise. With WLaSDI, the local
latent space is obtained using weak-form equation learning techniques. Compared
to the standard sparse identification of nonlinear dynamics (SINDy) used in
LaSDI, the variance reduction of the weak form guarantees a robust and precise
latent space recovery, hence allowing for a fast, robust, and accurate
simulation. We demonstrate the efficacy of WLaSDI vs. LaSDI on several common
benchmark examples including viscid and inviscid Burgers', radial advection,
and heat conduction. For instance, in the case of 1D inviscid Burgers'
simulations with the addition of up to 100% Gaussian white noise, the relative
error remains consistently below 6% for WLaSDI, while it can exceed 10,000% for
LaSDI. Similarly, for radial advection simulations, the relative errors stay
below 15% for WLaSDI, in stark contrast to the potential errors of up to
10,000% with LaSDI. Moreover, speedups of several orders of magnitude can be
obtained with WLaSDI. For example, applying WLaSDI to 1D Burgers' yields a 140X
speedup compared to the corresponding full order model.
Python code to reproduce the results in this work is available at
(https://github.com/MathBioCU/PyWSINDy_ODE) and
(https://github.com/MathBioCU/PyWLaSDI).
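The weak-form trick behind WLaSDI's noise robustness can be sketched in a few lines of NumPy: instead of differentiating noisy latent trajectories, multiply the latent ODE dz/dt = Theta(z) Xi by a compactly supported test function and integrate by parts, so only integrals of the data appear. The sketch below is illustrative and is not the PyWLaSDI API; poly_library, weak_form_fit, and all parameter choices are assumptions.

```python
import numpy as np

def poly_library(z):
    """Hypothetical candidate library: constant, linear, and quadratic terms."""
    n, d = z.shape
    cols = [np.ones((n, 1)), z]
    cols += [(z[:, i] * z[:, j])[:, None] for i in range(d) for j in range(i, d)]
    return np.hstack(cols)

def weak_form_fit(z, t, library=poly_library, n_test=60, support=31):
    """Fit latent ODEs dz/dt = library(z) @ Xi in weak form.

    Multiplying by a compactly supported test function phi and integrating
    by parts replaces the noisy derivative dz/dt with smooth integrals:
        -int phi'(t) z(t) dt = [ int phi(t) library(z(t)) dt ] @ Xi
    """
    dt = t[1] - t[0]
    Theta = library(z)
    s = np.linspace(-1.0, 1.0, support)
    phi = (1.0 - s**2) ** 2                    # bump function, zero at window ends
    half = 0.5 * (support - 1) * dt            # half-width of the window in time
    dphi = -4.0 * s * (1.0 - s**2) / half      # d(phi)/dt via the chain rule
    G, b = [], []
    for c in np.linspace(0, len(t) - support, n_test).astype(int):
        w = slice(c, c + support)
        b.append(-(dphi[:, None] * z[w]).sum(axis=0) * dt)      # -int phi' z dt
        G.append((phi[:, None] * Theta[w]).sum(axis=0) * dt)    #  int phi Theta dt
    Xi, *_ = np.linalg.lstsq(np.vstack(G), np.vstack(b), rcond=None)
    return Xi  # (n_features, n_latent): one ODE per latent coordinate
```

Given latent trajectories z (e.g., from POD or an autoencoder) sampled at times t, weak_form_fit returns the coefficient matrix of a local latent model that can then be integrated cheaply.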
Related papers
- Dynamic Diffusion Transformer [67.13876021157887]
Diffusion Transformer (DiT) has demonstrated superior performance but suffers from substantial computational costs.
We propose Dynamic Diffusion Transformer (DyDiT), an architecture that dynamically adjusts its computation along both timestep and spatial dimensions during generation.
With 3% additional fine-tuning, our method reduces the FLOPs of DiT-XL by 51%, accelerates generation by 1.73x, and achieves a competitive FID score of 2.07 on ImageNet.
arXiv Detail & Related papers (2024-10-04T14:14:28Z)
- DyFADet: Dynamic Feature Aggregation for Temporal Action Detection [70.37707797523723]
We build a novel dynamic feature aggregation (DFA) module that can adapt kernel weights and receptive fields at different timestamps.
Using DFA helps to develop a Dynamic TAD head (DyHead), which adaptively aggregates the multi-scale features with adjusted parameters.
DyFADet, a new dynamic TAD model, achieves promising performance on a series of challenging TAD benchmarks.
arXiv Detail & Related papers (2024-07-03T15:29:10Z)
- Physics-informed active learning with simultaneous weak-form latent space dynamics identification [0.2999888908665658]
We introduce a weak-form estimation of nonlinear dynamics (WENDy) into gLaSDI.
An autoencoder and WENDy are trained simultaneously to discover latent-space dynamics of high-dimensional data.
We show that WgLaSDI outperforms gLaSDI by orders of magnitude, achieving 1-7% relative errors.
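As a rough illustration of what "trained simultaneously" can mean here, a single objective might sum the autoencoder reconstruction error and a weak-form residual of the latent dynamics. Everything below (the function name, the model attributes, the weighting lam) is a hypothetical PyTorch sketch, not the authors' formulation.

```python
import torch

def joint_weak_form_loss(model, u, phi, dphi, dt, lam=0.1):
    """Hypothetical joint objective: reconstruct the snapshots AND satisfy
    weak-form latent dynamics on sliding time windows.

    model.encode / model.decode: autoencoder halves.
    model.library(z): candidate terms; model.Xi: trainable ODE coefficients.
    u: (n_time, n_full) snapshots; phi, dphi: (support,) test function and
    its time derivative on one window; dt: time step.
    """
    z = model.encode(u)                               # (n_time, n_latent)
    recon = torch.mean((model.decode(z) - u) ** 2)    # reconstruction term
    Theta = model.library(z)                          # (n_time, n_features)
    support, res = phi.shape[0], 0.0
    for c in range(0, z.shape[0] - support, max(1, support // 2)):
        w = slice(c, c + support)
        lhs = -(dphi[:, None] * z[w]).sum(0) * dt     # -int phi' z dt
        rhs = ((phi[:, None] * Theta[w]).sum(0) * dt) @ model.Xi
        res = res + torch.mean((lhs - rhs) ** 2)      # weak-form residual
    return recon + lam * res                          # minimized jointly
```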
arXiv Detail & Related papers (2024-06-29T06:52:59Z)
- A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling [0.20742830443146304]
We focus on a framework known as Latent Space Dynamics Identification (LaSDI), which transforms the high-fidelity data, governed by a PDE, to simpler and low-dimensional data, governed by ordinary differential equations (ODEs).
Each building block of LaSDI can be easily modulated depending on the application, which makes the LaSDI framework highly flexible.
We demonstrate the performance of different LaSDI approaches on Burgers equation, a non-linear heat conduction problem, and a plasma physics problem, showing that LaSDI algorithms can achieve relative errors of less than a few percent and speedups of up to thousands of times.
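For concreteness, a minimal linear-compression variant of that recipe (POD for the compression step rather than an autoencoder) is sketched below, reusing weak_form_fit and poly_library from the earlier sketch; the function and its choices are assumptions, not any reviewed implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lasdi_style_rom(U, t, r=5):
    """Illustrative LaSDI-style pipeline: snapshots -> latent ODEs -> fast ROM.

    U: (n_time, n_dof) full-order snapshots for one training parameter.
    """
    # 1) Compress: rank-r POD basis via SVD (an autoencoder also works).
    Ubar = U.mean(axis=0)
    _, _, Vt = np.linalg.svd(U - Ubar, full_matrices=False)
    V = Vt[:r].T                                    # (n_dof, r) basis
    z = (U - Ubar) @ V                              # latent trajectory (n_time, r)
    # 2) Identify latent ODEs, e.g. with weak_form_fit from the sketch above.
    Xi = weak_form_fit(z, t)
    # 3) ROM prediction: integrate the small ODE system, then decode.
    rhs = lambda _, zz: poly_library(zz[None, :])[0] @ Xi
    sol = solve_ivp(rhs, (t[0], t[-1]), z[0], t_eval=t)
    return sol.y.T @ V.T + Ubar                     # reconstructed snapshots
```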
arXiv Detail & Related papers (2024-03-16T00:45:06Z)
- Can LLMs Separate Instructions From Data? And What Do We Even Mean By That? [60.50127555651554]
Large Language Models (LLMs) show impressive results in numerous practical applications, but they lack essential safety features.
This makes them vulnerable to manipulations such as indirect prompt injections and generally unsuitable for safety-critical tasks.
We introduce a formal measure for instruction-data separation and an empirical variant that is calculable from a model's outputs.
arXiv Detail & Related papers (2024-03-11T15:48:56Z)
- Data-Driven Autoencoder Numerical Solver with Uncertainty Quantification for Fast Physical Simulations [0.0]
We present a hybrid deep-learning and Bayesian ROM.
It trains an autoencoder on full-order-model (FOM) data and simultaneously learns simpler equations governing the latent space.
Our framework is able to achieve up to 100,000 times speed-up and less than 7% relative error on fluid mechanics problems.
arXiv Detail & Related papers (2023-12-02T04:03:32Z)
- DistillSpec: Improving Speculative Decoding via Knowledge Distillation [70.61777015900272]
Speculative decoding (SD) accelerates large language model inference by employing a faster draft model for generating multiple tokens.
We propose DistillSpec that uses knowledge distillation to better align the draft model with the target model, before applying SD.
We show that DistillSpec yields impressive 10-45% speedups over standard SD on a range of standard benchmarks.
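For context, speculative decoding's verify step uses a standard accept/reject rule; DistillSpec improves the acceptance rate by distilling the draft model, not by changing this loop. A minimal sketch, assuming per-position target and draft distributions are given (the bonus token after a full acceptance is left to the caller):

```python
import numpy as np

def verify_draft(target_probs, draft_probs, draft_tokens, rng):
    """One speculative-decoding verify step (Leviathan-style sketch).

    target_probs, draft_probs: (k, vocab) distributions per draft position.
    draft_tokens: (k,) token ids proposed by the draft model.
    Returns the accepted tokens (plus a corrected token on rejection).
    """
    out = []
    for i, tok in enumerate(draft_tokens):
        p, q = target_probs[i, tok], draft_probs[i, tok]
        if rng.random() < min(1.0, p / q):          # accept with prob min(1, p/q)
            out.append(int(tok))
        else:                                       # reject: resample from residual
            residual = np.maximum(target_probs[i] - draft_probs[i], 0.0)
            out.append(int(rng.choice(len(residual), p=residual / residual.sum())))
            break
    return out
```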
arXiv Detail & Related papers (2023-10-12T16:21:04Z)
- GPLaSDI: Gaussian Process-based Interpretable Latent Space Dynamics Identification through Deep Autoencoder [0.0]
We introduce GPLaSDI, a novel Gaussian process-based LaSDI framework that relies on latent-space ODEs.
We demonstrate the effectiveness of our approach on the Burgers equation, the Vlasov equation for plasma physics, and a rising thermal bubble problem.
Our proposed method achieves between 200 and 100,000 times speed-up, with up to 7% relative error.
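A hypothetical sketch of the Gaussian-process ingredient, assuming its role is to interpolate latent-ODE coefficients across the PDE parameter space and report predictive uncertainty (names and shapes are assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_coefficient_gps(train_params, train_Xi):
    """One GP per latent-ODE coefficient over the parameter space.

    train_params: (n_train, n_params); train_Xi: (n_train, n_coeffs),
    one flattened coefficient matrix per training simulation.
    """
    return [GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
            .fit(train_params, train_Xi[:, j])
            for j in range(train_Xi.shape[1])]

def predict_coefficients(gps, new_param):
    """Predict coefficients and their standard deviations at a new parameter."""
    means, stds = zip(*(gp.predict(new_param.reshape(1, -1), return_std=True)
                        for gp in gps))
    return np.ravel(means), np.ravel(stds)          # Xi and its uncertainty
```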
arXiv Detail & Related papers (2023-08-10T23:54:12Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- gLaSDI: Parametric Physics-informed Greedy Latent Space Dynamics Identification [0.5249805590164902]
A physics-informed greedy Latent Space Dynamics Identification (gLaSDI) method is proposed for accurate, efficient, and robust data-driven reduced-order modeling.
An interactive training algorithm is adopted for the autoencoder and local DI models, which enables identification of simple latent-space dynamics.
The effectiveness of the proposed framework is demonstrated by modeling various nonlinear dynamical problems.
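The greedy outer loop might look like the sketch below; train and residual stand in for the autoencoder/DI training and the physics-informed error indicator, and the details differ from the paper.

```python
import numpy as np

def greedy_sampling(candidate_params, train, residual, budget=10):
    """Sketch of physics-informed greedy sampling of training parameters.

    train(params) -> rom: fits the autoencoder and local DI models.
    residual(rom, p) -> float: PDE residual of the ROM prediction at p.
    """
    pool = list(range(len(candidate_params)))
    chosen = [pool.pop(0)]                                   # seed parameter
    for _ in range(budget):
        rom = train([candidate_params[i] for i in chosen])   # interactive retraining
        errs = [residual(rom, candidate_params[i]) for i in pool]
        chosen.append(pool.pop(int(np.argmax(errs))))        # worst-resolved point
    return train([candidate_params[i] for i in chosen]), chosen
```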
arXiv Detail & Related papers (2022-04-26T00:15:46Z)
- Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling [106.68533003806276]
We show that high-quality samples can be obtained by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score.
We show that Discriminator Driven Latent Sampling (DDLS) is highly efficient compared to previous methods which work in the high-dimensional pixel space.
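A minimal sketch of that sampler as Langevin dynamics on the induced latent-space energy, assuming a standard normal latent prior and a discriminator D that returns a logit (the paper's exact procedure may add refinements):

```python
import torch

def ddls_style_sampling(G, D, z, steps=100, eps=1e-2):
    """Langevin dynamics on log p(z) + D(G(z)), with p(z) standard normal."""
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        log_density = -0.5 * (z ** 2).sum(dim=1) + D(G(z)).squeeze(-1)
        grad = torch.autograd.grad(log_density.sum(), z)[0]
        # ascend the log-density with injected Gaussian noise
        z = z + 0.5 * eps * grad + (eps ** 0.5) * torch.randn_like(z)
    return G(z.detach())                              # decode final latents
```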
arXiv Detail & Related papers (2020-03-12T23:33:50Z)