Energy-Preserving Reduced Operator Inference for Efficient Design and
Control
- URL: http://arxiv.org/abs/2401.02889v2
- Date: Wed, 7 Feb 2024 21:38:35 GMT
- Title: Energy-Preserving Reduced Operator Inference for Efficient Design and
Control
- Authors: Tomoki Koike, Elizabeth Qian
- Abstract summary: This work presents a physics-preserving reduced model learning approach that targets partial differential equations.
EP-OpInf learns efficient and accurate reduced models that retain this energy-preserving structure.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many-query computations, in which a computational model for an engineering
system must be evaluated many times, are crucial in design and control. For
systems governed by partial differential equations (PDEs), typical
high-fidelity numerical models are high-dimensional and too computationally
expensive for the many-query setting. Thus, efficient surrogate models are
required to enable low-cost computations in design and control. This work
presents a physics-preserving reduced model learning approach that targets PDEs
whose quadratic operators preserve energy, such as those arising in governing
equations in many fluids problems. The approach is based on the Operator
Inference method, which fits reduced model operators to state snapshot and time
derivative data in a least-squares sense. However, Operator Inference does not
generally learn a reduced quadratic operator with the energy-preserving
property of the original PDE. Thus, we propose a new energy-preserving Operator
Inference (EP-OpInf) approach, which imposes this structure on the learned
reduced model via constrained optimization. Numerical results using the viscous
Burgers' equation and the Kuramoto-Sivashinsky equation (KSE) demonstrate that EP-OpInf
learns efficient and accurate reduced models that retain this energy-preserving
structure.
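
The constrained learning problem described in the abstract can be made concrete with a small sketch. Below is a minimal Python/NumPy illustration (not the authors' code; the function name fit_ep_opinf, the Tikhonov term eps, and the KKT formulation are assumptions) of fitting a quadratic reduced model dr/dt = A r + H (r kron r) to snapshot and time-derivative data by least squares, while enforcing the energy-preserving condition r^T H (r kron r) = 0 for all r. That condition is equivalent to the fully symmetric part of the tensor H vanishing, which gives a set of linear equality constraints on the entries of H.

```python
# Minimal sketch of energy-preserving Operator Inference (EP-OpInf-style);
# an illustration under stated assumptions, not the authors' implementation.
import numpy as np

def fit_ep_opinf(R, Rdot, eps=1e-6):
    """Fit a quadratic ROM  dr/dt = A r + H (r kron r)  to reduced data.

    R, Rdot : (r, K) reduced snapshots and their time derivatives.
    Returns A with shape (r, r) and H with shape (r, r, r), where H is
    constrained so that the cubic form sum_ijk H[i,j,k] v_i v_j v_k
    vanishes identically (energy preservation of the quadratic term).
    """
    r, K = R.shape
    p = r + r * r                 # unknowns per ROM equation (one row of [A, H])
    n = r * p                     # total unknowns, row-major vec of [A, H]

    # Data matrix: column k stacks [r_k ; r_k kron r_k].
    Q = np.einsum('ik,jk->ijk', R, R).reshape(r * r, K)
    D = np.vstack([R, Q]).T       # (K, p)

    # Block least-squares system: one copy of D per ROM equation.
    Als = np.kron(np.eye(r), D)   # (r*K, n)
    b = Rdot.reshape(-1)          # rows of Rdot stacked to match the blocks

    # Position of H[i, j, k] inside the row-major unknown vector.
    def xidx(i, j, k):
        return i * p + r + j * r + k

    # Energy preservation <=> the fully symmetric part of H vanishes:
    # for every index triple, the sum of H over its permutations is zero.
    rows = []
    for i in range(r):
        for j in range(i, r):
            for k in range(j, r):
                c = np.zeros(n)
                for (a, s, t) in {(i, j, k), (i, k, j), (j, i, k),
                                  (j, k, i), (k, i, j), (k, j, i)}:
                    c[xidx(a, s, t)] = 1.0
                rows.append(c)
    C = np.array(rows)            # (m, n) linear equality constraints
    m = C.shape[0]

    # Equality-constrained least squares via the KKT system; the small
    # Tikhonov term eps keeps the normal equations well posed (regularized
    # least squares is standard practice in Operator Inference).
    G = Als.T @ Als + eps * np.eye(n)
    KKT = np.block([[G, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([Als.T @ b, np.zeros(m)])
    x = np.linalg.solve(KKT, rhs)[:n]

    O = x.reshape(r, p)
    return O[:, :r], O[:, r:].reshape(r, r, r)
```

The KKT solve enforces the constraints exactly rather than penalizing them, which is one natural way to realize the constrained optimization mentioned in the abstract. A quick check on synthetic data (random, purely illustrative) confirms the learned quadratic term contributes no energy:

```python
rng = np.random.default_rng(0)
R, Rdot = rng.standard_normal((3, 200)), rng.standard_normal((3, 200))
A, H = fit_ep_opinf(R, Rdot)
v = rng.standard_normal(3)
print(v @ (H.reshape(3, -1) @ np.kron(v, v)))  # ~ 0 up to solver accuracy
```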
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Structure-preserving learning for multi-symplectic PDEs [8.540823673172403]
This paper presents an energy-preserving machine learning method for inferring reduced-order models (ROMs) by exploiting the multi-symplectic form of partial differential equations (PDEs).
We prove that the proposed method satisfies spatially discrete local energy conservation and preserves the multi-symplectic conservation laws.
arXiv Detail & Related papers (2024-09-16T16:07:21Z)
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn a surrogate model for the residual operator mapping on top of existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferable for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
- Physics-informed Discretization-independent Deep Compositional Operator Network [1.2430809884830318]
We introduce a novel physics-informed model architecture which can generalize to various discrete representations of PDE parameters and irregular domain shapes.
Inspired by deep operator neural networks, our model learns a parameter embedding in a discretization-independent manner and applies it repeatedly.
Numerical results demonstrate the accuracy and efficiency of the proposed method.
arXiv Detail & Related papers (2024-04-21T12:41:30Z)
- PICL: Physics Informed Contrastive Learning for Partial Differential Equations [7.136205674624813]
We develop a novel contrastive pretraining framework that improves neural operator generalization across multiple governing equations simultaneously.
A combination of physics-informed system evolution and latent-space model output is anchored to input data and used in our distance function.
We find that physics-informed contrastive pretraining improves accuracy for the Fourier Neural Operator in fixed-future and autoregressive rollout tasks for the 1D and 2D Heat, Burgers', and linear advection equations.
arXiv Detail & Related papers (2024-01-29T17:32:22Z)
- Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
arXiv Detail & Related papers (2023-10-19T15:13:58Z)
- An operator preconditioning perspective on training in physics-informed machine learning [17.919648902857517]
We investigate the behavior of gradient descent algorithms in machine learning methods like PINNs.
Our key result is that the difficulty in training these models is closely related to the conditioning of a specific differential operator.
arXiv Detail & Related papers (2023-10-09T15:37:06Z)
- Efficient Neural PDE-Solvers using Quantization Aware Training [71.0934372968972]
We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and across three orders of magnitude in FLOPs.
arXiv Detail & Related papers (2023-08-14T09:21:19Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art performance, with an average relative gain of 11.5% across seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Reduced operator inference for nonlinear partial differential equations [2.389598109913753]
We present a new machine learning method that learns from data a surrogate model for predicting the evolution of a system governed by a time-dependent nonlinear partial differential equation (PDE).
Our formulation generalizes the Operator Inference method previously developed in [B. Peherstorfer and K. Willcox, Data-driven operator inference for non-intrusive projection-based model reduction, Computer Methods in Applied Mechanics and Engineering, 306] for systems governed by ordinary differential equations; a sketch of the snapshot-projection step common to this line of work appears after this list.
arXiv Detail & Related papers (2021-01-29T21:50:20Z)
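
As referenced in the last entry above, a key preprocessing step in the Operator Inference workflow is compressing high-dimensional snapshots with a proper orthogonal decomposition (POD) basis before fitting the reduced operators. Below is a minimal sketch of that step (the function name, argument names, and shapes are illustrative assumptions, not code from any of the papers above):

```python
import numpy as np

def pod_reduce(S, Sdot, r):
    """S, Sdot : (N, K) high-dimensional state snapshots and time derivatives.
    Returns the rank-r POD basis V (N, r) plus the reduced data (r, K)
    that a fitting routine like fit_ep_opinf above would consume."""
    V, _, _ = np.linalg.svd(S, full_matrices=False)
    V = V[:, :r]            # leading r left singular vectors = POD modes
    return V, V.T @ S, V.T @ Sdot
```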