Physically consistent predictive reduced-order modeling by enhancing Operator Inference with state constraints
- URL: http://arxiv.org/abs/2502.03672v1
- Date: Wed, 05 Feb 2025 23:33:31 GMT
- Title: Physically consistent predictive reduced-order modeling by enhancing Operator Inference with state constraints
- Authors: Hyeonghun Kim, Boris Kramer
- Abstract summary: This paper presents a new approach to augment Operator Inference by embedding state constraints in reduced-order model predictions.
For an application to char combustion, we demonstrate that the proposed approach yields state predictions superior to those of the other methods in terms of stability and accuracy.
- Abstract: Numerical simulations of complex multiphysics systems, such as the char combustion considered herein, yield numerous state variables that inherently exhibit physical constraints. This paper presents a new approach to augment Operator Inference -- a methodology within scientific machine learning that learns from data a low-dimensional representation of a high-dimensional system governed by nonlinear partial differential equations -- by embedding such state constraints in the reduced-order model predictions. In the model learning process, we propose a new way to choose regularization hyperparameters based on a key performance indicator. Since embedding state constraints improves the stability of the Operator Inference reduced-order model, we compare the proposed state constraints-embedded Operator Inference with the standard Operator Inference and other stability-enhancing approaches. For an application to char combustion, we demonstrate that the proposed approach yields state predictions superior to those of the other methods in terms of stability and accuracy. It extrapolates more than 200% past the training regime while remaining computationally efficient and physically consistent.
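The Operator Inference pipeline the abstract describes can be sketched in a few lines: compute a POD basis from snapshots, fit linear and quadratic reduced operators by regularized least squares, then enforce physical bounds on the reconstructed state during time stepping. This is a minimal illustrative sketch, not the paper's implementation; the function names, the forward-Euler integrator, and the clip-and-project constraint handling are assumptions for illustration, and the paper's actual constraint-embedding strategy may differ.

```python
import numpy as np

def pod_basis(X, r):
    """Left singular vectors of the snapshot matrix X (n x k) give an
    r-dimensional POD basis V (n x r)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :r]

def fit_opinf(Xhat, dXhat, reg=1e-3):
    """Regularized least-squares fit of a linear-quadratic reduced model
    d/dt xhat = A xhat + H kron(xhat, xhat) from reduced snapshots Xhat
    (r x k) and their time derivatives dXhat (r x k)."""
    r, k = Xhat.shape
    # Data matrix: reduced states stacked over their quadratic interactions.
    Q = np.vstack([Xhat, np.einsum('ik,jk->ijk', Xhat, Xhat).reshape(r * r, k)])
    D = Q.T                                         # k x (r + r^2)
    # Tikhonov-regularized normal equations for the operator block O = [A, H].
    O = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ dXhat.T).T
    return O[:, :r], O[:, r:]                       # A (r x r), H (r x r^2)

def step_constrained(xhat, A, H, V, lo, hi, dt):
    """One forward-Euler step of the reduced model; the reconstructed full
    state is clipped to the physical bounds [lo, hi] and projected back,
    which keeps the prediction physically consistent."""
    xhat = xhat + dt * (A @ xhat + H @ np.kron(xhat, xhat))
    x = np.clip(V @ xhat, lo, hi)                   # enforce state constraints
    return V.T @ x                                  # back to reduced coordinates
```

In this sketch the regularization weight `reg` is fixed; the abstract instead proposes selecting such hyperparameters via a key performance indicator, which would wrap `fit_opinf` in an outer search loop.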
Related papers
- Chaos into Order: Neural Framework for Expected Value Estimation of Stochastic Partial Differential Equations [0.9944647907864256]
We introduce a novel neural framework for SPDE estimation that eliminates the need for discretization and for explicitly modeling uncertainty.
This is the first neural framework capable of directly estimating the expected values of SPDEs in an entirely non-discretized manner, offering a step forward in scientific computing.
Our findings highlight the immense potential of neural-based SPDE solvers, particularly for high-dimensional problems where conventional techniques falter.
arXiv Detail & Related papers (2025-02-05T23:27:28Z) - On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Guaranteed Stable Quadratic Models and their applications in SINDy and Operator Inference [9.599029891108229]
We focus on an operator inference methodology that builds dynamical models.
For inference, we aim to learn the operators of a model by setting up an appropriate optimization problem.
We present several numerical examples, illustrating the preservation of stability and discussing its comparison with the existing state-of-the-art approach to infer operators.
arXiv Detail & Related papers (2023-08-26T09:00:31Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z) - Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
arXiv Detail & Related papers (2021-11-02T12:33:42Z) - Physics-informed regularization and structure preservation for learning stable reduced models from data with operator inference [0.0]
Operator inference learns low-dimensional dynamical-system models with nonlinear terms from trajectories of high-dimensional physical systems.
A regularizer for operator inference that induces a stability bias onto quadratic models is proposed.
A formulation of operator inference is proposed that enforces model constraints for preserving structure.
arXiv Detail & Related papers (2021-07-06T13:15:54Z) - Non-intrusive Nonlinear Model Reduction via Machine Learning Approximations to Low-dimensional Operators [0.0]
We propose a method that enables traditionally intrusive reduced-order models to be accurately approximated in a non-intrusive manner.
The approach approximates the low-dimensional operators associated with projection-based reduced-order models (ROMs) using modern machine-learning regression techniques.
In addition to enabling non-intrusivity, we demonstrate that the approach also leads to very low computational complexity, achieving up to $1000\times$ reduction in run time.
arXiv Detail & Related papers (2021-06-17T17:04:42Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - DISCO: Double Likelihood-free Inference Stochastic Control [29.84276469617019]
We propose to leverage the power of modern simulators and recent techniques in Bayesian statistics for likelihood-free inference.
The posterior distribution over simulation parameters is propagated through a potentially non-analytical model of the system.
Experiments show that the proposed controller attained superior performance and robustness on classical control and robotics tasks.
arXiv Detail & Related papers (2020-02-18T05:29:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.