Efficient nonlinear manifold reduced order model
- URL: http://arxiv.org/abs/2011.07727v1
- Date: Fri, 13 Nov 2020 18:46:21 GMT
- Title: Efficient nonlinear manifold reduced order model
- Authors: Youngkyu Kim and Youngsoo Choi and David Widemann and Tarek Zohdi
- Abstract summary: The nonlinear manifold ROM (NM-ROM) can better approximate high-fidelity model solutions with a smaller latent-space dimension than LS-ROMs.
Results show that neural networks can learn a more efficient latent space representation on advection-dominated data from 2D Burgers' equations with a high Reynolds number.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional linear subspace reduced order models (LS-ROMs) are able to
accelerate physical simulations, in which the intrinsic solution space falls
into a subspace with a small dimension, i.e., the solution space has a small
Kolmogorov n-width. However, for physical phenomena not of this type, such as
advection-dominated flow phenomena, a low-dimensional linear subspace poorly
approximates the solution. To address cases such as these, we have developed an
efficient nonlinear manifold ROM (NM-ROM), which can better approximate
high-fidelity model solutions with a smaller latent space dimension than the
LS-ROMs. Our method takes advantage of the existing numerical methods that are
used to solve the corresponding full order models (FOMs). The efficiency is
achieved by developing a hyper-reduction technique in the context of the
NM-ROM. Numerical results show that neural networks can learn a more efficient
latent space representation on advection-dominated data from 2D Burgers'
equations with a high Reynolds number. A speed-up of up to 11.7 for 2D Burgers'
equations is achieved with an appropriate treatment of the nonlinear terms
through a hyper-reduction technique.
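To make the mechanics of the abstract concrete, below is a minimal, hypothetical sketch of the NM-ROM idea: a shallow neural decoder replaces the linear subspace, and a Gauss-Newton solve over the latent variables evaluates the FOM residual only at a small set of sampled indices (the hyper-reduction step). The names ShallowDecoder, nm_rom_solve, residual_fn, and sample_idx are illustrative assumptions, not the authors' code.

```python
import torch

class ShallowDecoder(torch.nn.Module):
    """Shallow decoder g: R^k -> R^N mapping a latent vector to the full state."""
    def __init__(self, k, N, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(k, width), torch.nn.Tanh(), torch.nn.Linear(width, N))

    def forward(self, z):
        return self.net(z)

def nm_rom_solve(decoder, z, residual_fn, sample_idx, n_iters=10):
    """Gauss-Newton on the sampled residual of one implicit time step.
    residual_fn(u) is the FOM residual for the current step; only the rows in
    sample_idx are evaluated and differentiated, which is where the
    hyper-reduction saving comes from (len(sample_idx) is small but should be
    at least the latent dimension)."""
    def sampled_residual(zz):
        return residual_fn(decoder(zz))[sample_idx]

    for _ in range(n_iters):
        with torch.no_grad():
            r = sampled_residual(z)
        # Jacobian of the sampled residual w.r.t. the latent variables only.
        J = torch.autograd.functional.jacobian(sampled_residual, z)
        dz = torch.linalg.lstsq(J, -r.unsqueeze(1)).solution.squeeze(1)
        z = z + dz
    return z
```

The online cost never scales with the full dimension N: each iteration touches only the sampled residual rows and a k-dimensional least-squares solve.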
Related papers
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem
We present a "line theoremarity" establishing a direct relationship between the layer-wise $ell$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
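A hedged sketch of the rotate-then-quantize pipeline this summary describes, assuming a plain normalized Hadamard rotation and a scalar Lloyd-Max grid for a unit Gaussian (the paper's HIGGS method uses randomized Hadamard rotations and richer grids; quantize_higgs_like is an illustrative name):

```python
import numpy as np
from scipy.linalg import hadamard

def quantize_higgs_like(W, grid):
    """Rotate, quantize each entry to the nearest grid point, rotate back."""
    n = W.shape[0]                    # assumes a power-of-2 row dimension
    H = hadamard(n) / np.sqrt(n)      # orthogonal: H @ H.T = I
    Wr = H @ W                        # rotated weights, roughly Gaussian entries
    scale = Wr.std()
    idx = np.abs(Wr[..., None] / scale - grid).argmin(axis=-1)
    return H.T @ (grid[idx] * scale)  # dequantize and undo the rotation

# Approximate 8-level (3-bit) Lloyd-Max grid for a unit Gaussian.
grid = np.array([-2.152, -1.344, -0.756, -0.245, 0.245, 0.756, 1.344, 2.152])
W = np.random.default_rng(0).normal(size=(64, 64))
W_hat = quantize_higgs_like(W, grid)
print("relative MSE:", np.mean((W - W_hat) ** 2) / np.mean(W ** 2))
```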
arXiv Detail & Related papers (2024-11-26T15:35:44Z) - The role of interface boundary conditions and sampling strategies for Schwarz-based coupling of projection-based reduced order models [0.0]
We present a framework for the coupling of subdomain-local projection-based reduced order models (PROMs) using the Schwarz alternating method.
We show that it is possible to obtain a stable and accurate coupled model utilizing Dirichlet-Dirichlet (rather than Robin-Robin or alternating Dirichlet-Neumann) transmission BCs on the subdomain boundaries.
Our numerical results suggest that the proposed methodology has the potential to improve PROM accuracy.
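To make the coupling concrete, here is a toy alternating Schwarz iteration on 1D Poisson with two overlapping subdomains and Dirichlet-Dirichlet transmission conditions. The subdomain solves are full-order here for brevity; in the paper each would be replaced by a subdomain-local PROM.

```python
import numpy as np

def solve_poisson(f, left_val, right_val, h):
    """Direct finite-difference solve of -u'' = f with Dirichlet BCs."""
    n = len(f)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += left_val / h**2
    rhs[-1] += right_val / h**2
    return np.linalg.solve(A, rhs)

N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)                   # source; exact solution is u = x(1-x)/2
u = np.zeros(N)
i1, i2 = 60, 40                  # subdomain 1: [0, x[i1]], subdomain 2: [x[i2], 1]

for _ in range(20):              # alternating Schwarz sweeps
    # Subdomain 1: Dirichlet value at x[i1] taken from the current iterate.
    u[1:i1] = solve_poisson(f[1:i1], 0.0, u[i1], h)
    # Subdomain 2: Dirichlet value at x[i2] taken from the updated iterate.
    u[i2 + 1:-1] = solve_poisson(f[i2 + 1:-1], u[i2], 0.0, h)

print("max error:", np.abs(u - x * (1 - x) / 2).max())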
arXiv Detail & Related papers (2024-10-07T00:44:22Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the advantages of NWoS in accuracy, speed, and computational cost.
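For context, the classical walk-on-spheres estimator that NWoS builds on looks roughly like the sketch below; the neural component, which amortizes these walks, is not shown, and walk_on_spheres is an illustrative name.

```python
import numpy as np

def walk_on_spheres(x0, g, dist_to_boundary, eps=1e-4, n_walks=2000, seed=0):
    """Monte Carlo estimate of u(x0) for the Laplace equation with boundary
    data g, via the classical walk-on-spheres process."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        # Jump to a uniform point on the largest sphere centered at x that
        # fits inside the domain, until within eps of the boundary.
        while (r := dist_to_boundary(x)) > eps:
            v = rng.normal(size=d)
            x = x + r * v / np.linalg.norm(v)
        total += g(x)
    return total / n_walks

# Harmonic test in the unit disk: u(x, y) = x^2 - y^2 solves Laplace's
# equation, so the estimate at (0.3, 0.2) should be close to 0.05.
g = lambda x: x[0] ** 2 - x[1] ** 2
dist = lambda x: 1.0 - np.linalg.norm(x)   # distance to the unit circle
print(walk_on_spheres([0.3, 0.2], g, dist))
```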
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z) - Symplectic model reduction of Hamiltonian systems using data-driven
quadratic manifolds [0.559239450391449]
We present two novel approaches for the symplectic model reduction of high-dimensional Hamiltonian systems.
The addition of quadratic terms to the state approximation, which sits at the heart of the proposed methodologies, enables us to better represent intrinsic low-dimensionality.
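A minimal sketch of the quadratic state approximation referred to above, assuming the common data-driven construction x ≈ V z + W (z ⊗ z), with V a POD basis and W fit by least squares to the POD residual; the symplectic, structure-preserving ingredients of the paper are omitted.

```python
import numpy as np

def fit_quadratic_manifold(X, k):
    """X: (N, m) centered snapshot matrix; k: latent dimension.
    Returns V (POD basis) and W (quadratic correction), so each column x of X
    is approximated by V z + W (z kron z) with z = V^T x."""
    U = np.linalg.svd(X, full_matrices=False)[0]
    V = U[:, :k]
    Z = V.T @ X                                           # latent coords, (k, m)
    Q = np.einsum("im,jm->ijm", Z, Z).reshape(k * k, -1)  # z kron z features
    R = X - V @ Z                                         # residual the linear basis misses
    W = R @ np.linalg.pinv(Q)                             # least-squares fit
    return V, W

def decode(V, W, z):
    return V @ z + W @ np.kron(z, z)

# Synthetic check: data generated on an exactly quadratic manifold.
rng = np.random.default_rng(1)
N, k, m = 50, 3, 200
V0 = np.linalg.qr(rng.normal(size=(N, k)))[0]
W0 = 0.1 * rng.normal(size=(N, k * k))
Z0 = rng.normal(size=(k, m))
X = V0 @ Z0 + W0 @ np.einsum("im,jm->ijm", Z0, Z0).reshape(k * k, m)

V, W = fit_quadratic_manifold(X, k)
Z = V.T @ X
Q = np.einsum("im,jm->ijm", Z, Z).reshape(k * k, -1)
print("linear-only residual:", np.linalg.norm(X - V @ Z))
print("quadratic residual:  ", np.linalg.norm(X - V @ Z - W @ Q))
```

By construction the quadratic residual is never larger than the linear-only one, which is the sense in which the added terms "better represent intrinsic low-dimensionality."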
arXiv Detail & Related papers (2023-05-24T18:23:25Z) - Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first fully deep-learning-based surrogate model that jointly learns the evolution model and optimizes appropriate spatial resolutions.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z) - Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse
Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods.
Specifically, we prove that if the tangent space at a sample denoised by Tweedie's formula forms a Krylov subspace, then conjugate gradient (CG) iterations initialized with the denoised data keep the data-consistency update within that tangent space.
Our proposed method achieves inference more than 80 times faster than the previous state-of-the-art method.
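A hedged sketch of the kind of data-consistency update described here: a few conjugate-gradient steps on the normal equations, initialized at a denoised estimate, so the iterate stays in the Krylov subspace generated there. This is a generic CGNR routine, not the authors' implementation.

```python
import numpy as np

def cg_data_consistency(x0, A, y, n_steps=5):
    """Few-step CG on min_x ||A x - y||^2, initialized at the denoised x0."""
    x = x0.copy()
    r = A.T @ (y - A @ x)               # residual of the normal equations
    p = r.copy()
    for _ in range(n_steps):
        Ap = A.T @ (A @ p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))              # underdetermined forward operator
x_true = rng.normal(size=100)
y = A @ x_true
x0 = x_true + 0.3 * rng.normal(size=100)    # stand-in for a denoised sample
x = cg_data_consistency(x0, A, y)
print("data misfit before/after:",
      np.linalg.norm(A @ x0 - y), np.linalg.norm(A @ x - y))
```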
arXiv Detail & Related papers (2023-03-10T07:42:49Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art results, with an average relative gain of 11.5% across seven benchmarks.
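For intuition about the "classical spectral methods" the design draws on: in an orthogonal basis that diagonalizes the differential operator, solving the PDE reduces to dividing coefficients by eigenvalues. The textbook 1D Poisson example below illustrates that primitive; it is not the LSM architecture.

```python
import numpy as np

N = 256
x = np.linspace(0, 1, N + 2)[1:-1]          # interior grid, u(0) = u(1) = 0
f = np.sin(3 * np.pi * x) + 2 * np.sin(7 * np.pi * x)
k = np.arange(1, N + 1)

S = np.sqrt(2) * np.sin(np.pi * np.outer(k, x))  # discrete sine basis
f_hat = S @ f / (N + 1)                     # forward transform (coefficients)
u_hat = f_hat / (np.pi * k) ** 2            # -d2/dx2 is diagonal: divide by (k*pi)^2
u = S.T @ u_hat                             # inverse transform

u_exact = (np.sin(3 * np.pi * x) / (3 * np.pi) ** 2
           + 2 * np.sin(7 * np.pi * x) / (7 * np.pi) ** 2)
print("max error:", np.abs(u - u_exact).max())
```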
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Deep-HyROMnet: A deep learning-based operator approximation for
hyper-reduction of nonlinear parametrized PDEs [0.0]
We propose a strategy for learning nonlinear ROM operators using deep neural networks (DNNs).
The resulting hyper-reduced order model enhanced by DNNs is referred to as Deep-HyROMnet.
Numerical results show that Deep-HyROMnets are orders of magnitude faster than POD-Galerkin-DEIM ROMs while maintaining the same level of accuracy.
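A schematic reading of the idea, with hypothetical names: rather than DEIM-style interpolation, a small network is trained offline to map the reduced state (and parameters) directly to the projected nonlinear operator, so the online phase never touches full-order dimensions.

```python
import torch

class ReducedOperatorNet(torch.nn.Module):
    """Maps (reduced state z, parameters mu) -> V^T N(V z; mu) in R^k."""
    def __init__(self, k, p, width=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(k + p, width), torch.nn.ReLU(),
            torch.nn.Linear(width, width), torch.nn.ReLU(),
            torch.nn.Linear(width, k))

    def forward(self, z, mu):
        return self.net(torch.cat([z, mu], dim=-1))

def train_surrogate(model, Z, Mu, targets, epochs=500, lr=1e-3):
    """Z: (n, k) reduced states, Mu: (n, p) parameters, targets: (n, k)
    projected nonlinear terms collected offline from full-order runs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(Z, Mu), targets)
        loss.backward()
        opt.step()
    return model
```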
arXiv Detail & Related papers (2022-02-05T23:45:25Z) - Data-Driven Reduced-Order Modeling of Spatiotemporal Chaos with Neural
Ordinary Differential Equations [0.0]
We present a data-driven reduced order modeling method that capitalizes on the chaotic dynamics of partial differential equations.
We find that dimension reduction improves performance relative to predictions in the ambient space.
With the low-dimensional model, we find excellent short- and long-time statistical recreation of the true dynamics for widely spaced data.
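A minimal sketch of evolving dynamics in a learned low-dimensional space: an autoencoder (not shown) provides latent coordinates h, and a learned vector field dh/dt = f_theta(h) is integrated, here with a fixed-step RK4 to keep the example self-contained. Names are illustrative, not the authors' code.

```python
import torch

class LatentODE(torch.nn.Module):
    """Learned latent vector field dh/dt = f_theta(h), integrated with RK4."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.f = torch.nn.Sequential(
            torch.nn.Linear(dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, dim))

    def rk4_step(self, h, dt):
        k1 = self.f(h)
        k2 = self.f(h + 0.5 * dt * k1)
        k3 = self.f(h + 0.5 * dt * k2)
        k4 = self.f(h + dt * k3)
        return h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def forward(self, h0, dt, n_steps):
        traj = [h0]
        for _ in range(n_steps):
            traj.append(self.rk4_step(traj[-1], dt))
        return torch.stack(traj)

# Training (not shown) would minimize the mismatch between predicted latent
# trajectories and encoded data, with the autoencoder trained jointly or first.
```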
arXiv Detail & Related papers (2021-08-31T20:00:33Z) - A fast and accurate physics-informed neural network reduced order model
with shallow masked autoencoder [0.19116784879310023]
The nonlinear manifold ROM (NM-ROM) can better approximate high-fidelity model solutions with a smaller latent-space dimension than LS-ROMs.
Results show that neural networks can learn a more efficient latent space representation on advection-dominated data.
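A hedged sketch of what a shallow masked decoder can look like: a fixed sparsity mask on the output layer keeps each full-order degree of freedom connected to only a few hidden units, which is what makes sampled (hyper-reduced) output evaluations cheap. MaskedShallowDecoder is an illustrative name, not the paper's code.

```python
import torch

class MaskedShallowDecoder(torch.nn.Module):
    """Shallow decoder whose output layer is sparsified by a fixed 0/1 mask."""
    def __init__(self, k, N, width, mask):
        super().__init__()
        self.lin1 = torch.nn.Linear(k, width)
        self.lin2 = torch.nn.Linear(width, N)
        self.register_buffer("mask", mask)   # (N, width) sparsity pattern

    def forward(self, z):
        h = torch.sigmoid(self.lin1(z))
        # Masked weights: each output row touches only a few hidden units.
        return torch.nn.functional.linear(
            h, self.lin2.weight * self.mask, self.lin2.bias)
```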
arXiv Detail & Related papers (2020-09-25T00:48:19Z)