Energy-Preserving Reduced Operator Inference for Efficient Design and
Control
- URL: http://arxiv.org/abs/2401.02889v2
- Date: Wed, 7 Feb 2024 21:38:35 GMT
- Title: Energy-Preserving Reduced Operator Inference for Efficient Design and
Control
- Authors: Tomoki Koike, Elizabeth Qian
- Abstract summary: This work presents a physics-preserving reduced model learning approach that targets partial differential equations whose quadratic operators preserve energy.
EP-OpInf learns efficient and accurate reduced models that retain this energy-preserving structure.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many-query computations, in which a computational model for an engineering
system must be evaluated many times, are crucial in design and control. For
systems governed by partial differential equations (PDEs), typical
high-fidelity numerical models are high-dimensional and too computationally
expensive for the many-query setting. Thus, efficient surrogate models are
required to enable low-cost computations in design and control. This work
presents a physics-preserving reduced model learning approach that targets PDEs
whose quadratic operators preserve energy, such as those arising in governing
equations in many fluids problems. The approach is based on the Operator
Inference method, which fits reduced model operators to state snapshot and time
derivative data in a least-squares sense. However, Operator Inference does not
generally learn a reduced quadratic operator with the energy-preserving
property of the original PDE. Thus, we propose a new energy-preserving Operator
Inference (EP-OpInf) approach, which imposes this structure on the learned
reduced model via constrained optimization. Numerical results using the viscous
Burgers' equation and the Kuramoto-Sivashinsky equation (KSE) demonstrate that EP-OpInf
learns efficient and accurate reduced models that retain this energy-preserving
structure.
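To make the approach concrete, below is a minimal, self-contained numpy sketch of the two fits the abstract describes: plain Operator Inference is an unconstrained least-squares fit of reduced operators to snapshot and time-derivative data, and EP-OpInf replaces it with an equality-constrained fit enforcing H[i,j,k] + H[j,k,i] + H[k,i,j] = 0 on the (symmetrized) quadratic operator, which makes the quadratic term's contribution to d/dt(||xhat||^2 / 2) vanish identically. The null-space solver and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import null_space

def ep_opinf(Xhat, Xdot):
    """Fit a reduced model  d/dt xhat = A @ xhat + H @ kron(xhat, xhat)
    to reduced snapshots Xhat (r x K) and derivatives Xdot (r x K),
    with the energy-preserving constraint imposed exactly."""
    r, K = Xhat.shape
    n_ops = r + r * r                          # unknowns per output row: [A_i, H_i]
    # Data matrix: one row per snapshot, [xhat^T, (xhat kron xhat)^T].
    quad = np.einsum('ik,jk->kij', Xhat, Xhat).reshape(K, r * r)
    D = np.hstack([Xhat.T, quad])

    def hidx(a, b, c):                         # index of H[a, b, c] in the
        return a * n_ops + r + b * r + c       # stacked unknown vector

    rows = []
    for a in range(r):                         # symmetry: H[a,b,c] = H[a,c,b]
        for b in range(r):
            for c in range(b + 1, r):
                row = np.zeros(r * n_ops)
                row[hidx(a, b, c)], row[hidx(a, c, b)] = 1.0, -1.0
                rows.append(row)
    for i in range(r):                         # energy preservation:
        for j in range(i, r):                  # H[i,j,k] + H[j,k,i] + H[k,i,j] = 0
            for k in range(j, r):
                row = np.zeros(r * n_ops)
                for p, q, s in [(i, j, k), (j, k, i), (k, i, j)]:
                    row[hidx(p, q, s)] += 1.0
                rows.append(row)

    N = null_space(np.vstack(rows))            # basis of the feasible set
    G = np.kron(np.eye(r), D)                  # stacked least-squares system
    z, *_ = np.linalg.lstsq(G @ N, Xdot.reshape(-1), rcond=None)
    O = (N @ z).reshape(r, n_ops)
    return O[:, :r], O[:, r:].reshape(r, r, r)  # (A, H)
```

The null-space method reduces the constrained problem to an unconstrained least-squares solve over the feasible subspace, which stays cheap at the small reduced dimensions where such models are used.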
Related papers
- DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs [70.91804882618243]
This paper proposes DSMoE, a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks.
We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge.
Experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches.
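As a rough illustration of the routing mechanism described above, here is a hedged PyTorch sketch of a sigmoid gate with a straight-through estimator: the forward pass uses a hard 0/1 mask over FFN blocks, while gradients flow through the soft sigmoid scores. The shapes, the 0.5 threshold, and the module name are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class STEGate(nn.Module):
    """Token-wise gate over FFN blocks: sigmoid scores, hard 0/1 mask in
    the forward pass, gradients through the soft scores."""
    def __init__(self, d_model: int, n_blocks: int):
        super().__init__()
        self.proj = nn.Linear(d_model, n_blocks)

    def forward(self, x):                      # x: (batch, d_model)
        soft = torch.sigmoid(self.proj(x))     # differentiable block scores
        hard = (soft > 0.5).float()            # which blocks this token uses
        return soft + (hard - soft).detach()   # straight-through estimator
```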
arXiv Detail & Related papers (2025-02-18T02:37:26Z) - A Deep Learning approach for parametrized and time dependent Partial Differential Equations using Dimensionality Reduction and Neural ODEs [46.685771141109306]
We propose an autoregressive, data-driven method for time-dependent, parametric, and (typically) nonlinear PDEs, drawing on the analogy with classical numerical solvers.
We show that by leveraging DR we can deliver not only more accurate predictions, but also a considerably lighter and faster Deep Learning model.
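The recipe above (compress, learn dynamics in the latent space, roll out autoregressively) can be sketched in a few lines of numpy; here a linear least-squares time-stepper stands in for the paper's neural-ODE model, and all names are illustrative.

```python
import numpy as np

def pod_basis(X, r):
    """Leading r left singular vectors of the snapshot matrix X (n x K)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :r]

def fit_latent_stepper(Z):
    """One-step map Z[:, k+1] ~= M @ Z[:, k], fit by least squares."""
    sol, *_ = np.linalg.lstsq(Z[:, :-1].T, Z[:, 1:].T, rcond=None)
    return sol.T

def rollout(V, M, x0, n_steps):
    """Encode x0, step autoregressively in latent space, decode."""
    z = V.T @ x0
    traj = [z]
    for _ in range(n_steps):
        z = M @ z
        traj.append(z)
    return V @ np.stack(traj, axis=1)          # back to full state space
```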
arXiv Detail & Related papers (2025-02-12T11:16:15Z) - Physically consistent predictive reduced-order modeling by enhancing Operator Inference with state constraints [0.0]
This paper presents a new approach to augment Operator Inference by embedding state constraints in reduced-order model predictions.
For an application to char combustion, we demonstrate that the proposed approach yields state predictions superior to the other methods regarding stability and accuracy.
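One simple way to embed bound constraints in ROM predictions, sketched below under stated assumptions (an orthonormal basis V and elementwise bounds): after each reduced time step, reconstruct the full state, project it onto the admissible set, and re-encode. The paper's actual formulation may differ.

```python
import numpy as np

def constrained_step(step, V, z, lo=0.0, hi=1.0):
    """Advance the reduced state z one step, then enforce lo <= x <= hi
    on the reconstructed full state x = V @ z (V has orthonormal columns)."""
    z_next = step(z)                       # unconstrained ROM update
    x = np.clip(V @ z_next, lo, hi)        # project onto the bounds
    return V.T @ x                         # re-encode the feasible state
```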
arXiv Detail & Related papers (2025-02-05T23:33:31Z) - DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
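A hedged PyTorch sketch of a product layer in the spirit of the ProdLayer named above: alongside a linear mix of channels, the layer adds terms that are products of learned channel combinations, so it can represent dimensionally meaningful quantities such as products of physical fields. The exact parameterization here is an assumption, not the paper's.

```python
import torch
import torch.nn as nn

class ProdLayer(nn.Module):
    def __init__(self, c_in: int, c_out: int, n_pairs: int = 4):
        super().__init__()
        self.lin = nn.Linear(c_in, c_out)
        self.a = nn.Linear(c_in, n_pairs)    # left factors
        self.b = nn.Linear(c_in, n_pairs)    # right factors
        self.mix = nn.Linear(n_pairs, c_out)

    def forward(self, x):                    # x: (..., c_in)
        return self.lin(x) + self.mix(self.a(x) * self.b(x))
```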
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Structure-preserving learning for multi-symplectic PDEs [8.540823673172403]
This paper presents an energy-preserving machine learning method for inferring reduced-order models (ROMs) by exploiting the multi-symplectic form of partial differential equations (PDEs).
We prove that the proposed method satisfies spatially discrete local energy conservation and preserves the multi-symplectic conservation laws.
arXiv Detail & Related papers (2024-09-16T16:07:21Z) - PICL: Physics Informed Contrastive Learning for Partial Differential Equations [7.136205674624813]
We develop a novel contrastive pretraining framework that improves neural operator generalization across multiple governing equations simultaneously.
A combination of physics-informed system evolution and latent-space model output is anchored to the input data and used in our distance function.
We find that physics-informed contrastive pretraining improves accuracy for the Fourier Neural Operator in fixed-future and autoregressive rollout tasks for the 1D and 2D Heat, Burgers', and linear advection equations.
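A hedged PyTorch sketch of the contrastive objective this entry suggests: embeddings of a state and of its physics-informed evolution form a positive pair, other samples in the batch act as negatives, and an InfoNCE-style loss pulls positives together. The details (normalization, temperature) are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def info_nce(z_anchor, z_pos, temperature=0.1):
    """z_anchor, z_pos: (batch, d) embeddings of input states and of their
    physics-evolved counterparts; positives sit on the diagonal."""
    z_a = F.normalize(z_anchor, dim=1)
    z_p = F.normalize(z_pos, dim=1)
    logits = z_a @ z_p.T / temperature              # (batch, batch) similarities
    labels = torch.arange(z_a.shape[0], device=z_a.device)
    return F.cross_entropy(logits, labels)
```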
arXiv Detail & Related papers (2024-01-29T17:32:22Z) - Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared
Pre-trained Language Models [109.06052781040916]
We introduce a technique to enhance the inference efficiency of parameter-shared language models.
We also propose a simple pre-training technique that leads to fully or partially shared models.
Results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs.
arXiv Detail & Related papers (2023-10-19T15:13:58Z) - Efficient Neural PDE-Solvers using Quantization Aware Training [71.0934372968972]
We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and across three orders of magnitude in FLOPs.
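A minimal sketch of the core of quantization-aware training in this setting: weights pass through a fake-quantize op that rounds to low precision in the forward pass while letting gradients through unchanged. The bit width and symmetric rounding scheme are illustrative assumptions.

```python
import torch

def fake_quantize(w, n_bits=8):
    """Uniform symmetric quantization with a straight-through gradient."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    return w + (w_q - w).detach()          # forward: w_q, backward: identity
```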
arXiv Detail & Related papers (2023-08-14T09:21:19Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art results, with a relative gain of 11.5% averaged over seven benchmarks.
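A hedged PyTorch sketch of a neural spectral block of the kind this entry describes: move latent features to the frequency domain, apply learned weights to a truncated set of modes, and transform back, mirroring classical spectral methods. The parameterization is an assumption, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpectralBlock(nn.Module):
    def __init__(self, channels: int, n_modes: int):
        super().__init__()
        self.n_modes = n_modes               # must be <= length // 2 + 1
        self.weight = nn.Parameter(
            torch.randn(channels, n_modes, dtype=torch.cfloat) * 0.02)

    def forward(self, z):                    # z: (batch, channels, length)
        zf = torch.fft.rfft(z, dim=-1)       # latent features -> frequencies
        out = torch.zeros_like(zf)           # high modes are truncated
        out[..., :self.n_modes] = zf[..., :self.n_modes] * self.weight
        return torch.fft.irfft(out, n=z.shape[-1], dim=-1)
```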
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Neural Operator with Regularity Structure for Modeling Dynamics Driven
by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors from the theory of regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2D Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z) - Reduced operator inference for nonlinear partial differential equations [2.389598109913753]
We present a new machine learning method that learns from data a surrogate model for predicting the evolution of a system governed by a time-dependent nonlinear partial differential equation (PDE).
Our formulation generalizes the Operator Inference method previously developed in [B. Peherstorfer and K. Willcox, Data-driven operator inference for non-intrusive projection-based model reduction, Computer Methods in Applied Mechanics and Engineering, 306] for systems governed by ordinary differential equations.
arXiv Detail & Related papers (2021-01-29T21:50:20Z)