Operator Inference Aware Quadratic Manifolds with Isotropic Reduced Coordinates for Nonintrusive Model Reduction
- URL: http://arxiv.org/abs/2507.20463v1
- Date: Mon, 28 Jul 2025 01:45:55 GMT
- Title: Operator Inference Aware Quadratic Manifolds with Isotropic Reduced Coordinates for Nonintrusive Model Reduction
- Authors: Paul Schwerdtner, Prakash Mohan, Julie Bessac, Marc T. Henry de Frahan, Benjamin Peherstorfer
- Abstract summary: We propose a greedy training procedure that takes into account both the reconstruction error on the snapshot data and the prediction error of reduced models fitted to the data.
- Score: 0.9586962006710554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quadratic manifolds for nonintrusive reduced modeling are typically trained to minimize the reconstruction error on snapshot data, which means that the error of models fitted to the embedded data in downstream learning steps is ignored. In contrast, we propose a greedy training procedure that takes into account both the reconstruction error on the snapshot data and the prediction error of reduced models fitted to the data. Because our procedure learns quadratic manifolds with the objective of achieving accurate reduced models, it avoids oscillatory and other non-smooth embeddings that can hinder learning accurate reduced models. Numerical experiments on transport and turbulent flow problems show that quadratic manifolds trained with the proposed greedy approach lead to reduced models with up to two orders of magnitude higher accuracy than quadratic manifolds trained with respect to the reconstruction error alone.
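The abstract describes a greedy procedure that scores candidate basis vectors by both the quadratic manifold's reconstruction error on the snapshots and the prediction error of an operator-inference model fitted to the embedded coordinates. The following is a minimal sketch of what such a combined criterion could look like; the candidate pool (POD modes), the specific least-squares fits, and the weighting `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch, assuming: POD modes as the candidate pool, and the in-sample
# residual of the operator-inference fit standing in for the reduced model's
# prediction error. The paper's actual greedy objective may differ.
import numpy as np

def quad_features(Q):
    """Stack linear and pairwise-product (quadratic) features of the columns of Q."""
    i, j = np.triu_indices(Q.shape[0])
    return np.vstack([Q, Q[i] * Q[j]])

def manifold_error(S, V):
    """Relative reconstruction error of the quadratic manifold s ~ V q + W h(q)."""
    Q = V.T @ S                              # embedded coordinates
    H = quad_features(Q)[V.shape[1]:]        # quadratic features only
    W = (S - V @ Q) @ np.linalg.pinv(H)      # least-squares quadratic correction
    return np.linalg.norm(S - V @ Q - W @ H) / np.linalg.norm(S)

def opinf_error(S, Sdot, V):
    """Relative residual of an operator-inference fit dq/dt ~ [A, H] h(q)."""
    Q, Qdot = V.T @ S, V.T @ Sdot
    Z = quad_features(Q)
    O = Qdot @ np.linalg.pinv(Z)             # least-squares operators [A, H]
    return np.linalg.norm(Qdot - O @ Z) / np.linalg.norm(Qdot)

def greedy_quadratic_manifold(S, Sdot, r, gamma=1.0, n_candidates=30):
    """Greedily pick r modes minimizing reconstruction + gamma * prediction error.

    S: snapshot matrix (n x k); Sdot: time-derivative snapshots, e.g. from
    finite differences (n x k)."""
    U = np.linalg.svd(S, full_matrices=False)[0][:, :n_candidates]
    pool, chosen = list(range(U.shape[1])), []
    for _ in range(r):
        scores = [manifold_error(S, U[:, chosen + [c]])
                  + gamma * opinf_error(S, Sdot, U[:, chosen + [c]])
                  for c in pool]
        chosen.append(pool.pop(int(np.argmin(scores))))
    return U[:, chosen]
```

The second term is what distinguishes this from training on reconstruction error alone: a candidate mode that reconstructs snapshots well but yields embedded coordinates that are hard to fit with a reduced model is penalized, which is how oscillatory or non-smooth embeddings get screened out.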
Related papers
- The Double Descent Behavior in Two Layer Neural Network for Binary Classification [46.3107850275261]
Recent studies observed a surprising phenomenon in model test error known as double descent.
Our aim is to observe and investigate the mathematical theory behind the double descent behavior of the model test error for varying model sizes.
arXiv Detail & Related papers (2025-04-27T20:29:24Z)
- Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
(A hedged sketch of such a low-rank update appears after this list.)
arXiv Detail & Related papers (2024-05-28T20:43:53Z)
- Parameter uncertainties for imperfect surrogate models in the low-noise regime [0.3069335774032178]
We analyze the generalization error of misspecified, near-deterministic surrogate models.
We show posterior distributions must cover every training point to avoid a divergent generalization error.
This is demonstrated on model problems before application to thousand dimensional datasets in atomistic machine learning.
arXiv Detail & Related papers (2024-02-02T11:41:21Z)
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
(A hedged sketch of delay-coordinate Koopman fitting appears after this list.)
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- A DeepONet multi-fidelity approach for residual learning in reduced order modeling [0.0]
We introduce a novel approach to enhance the precision of reduced order models by exploiting a multi-fidelity perspective and DeepONets.
We propose to couple model reduction with machine-learning residual learning, such that the error of the reduced model can be learned by a neural network and inferred for new predictions.
arXiv Detail & Related papers (2023-02-24T15:15:07Z)
- Churn Reduction via Distillation [54.5952282395487]
We show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn.
We then show that distillation performs strongly for low churn training against a number of recent baselines.
(A hedged sketch of such a distillation objective appears after this list.)
arXiv Detail & Related papers (2021-06-04T18:03:31Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Operator inference of non-Markovian terms for learning reduced models from partially observed state trajectories [0.0]
This work introduces a non-intrusive model reduction approach for learning reduced models from trajectories of high-dimensional dynamical systems.
The proposed approach compensates for the loss of information due to the partially observed states by constructing non-Markovian reduced models.
Numerical results demonstrate that the proposed approach leads to non-Markovian reduced models that are predictive far beyond the training regime.
(A hedged sketch of fitting such memory terms appears after this list.)
arXiv Detail & Related papers (2021-03-01T23:55:52Z)
- Probabilistic error estimation for non-intrusive reduced models learned from data of systems governed by linear parabolic partial differential equations [0.0]
This work derives a residual-based a posteriori error estimator for reduced models learned with non-intrusive model reduction.
It is shown that the quantities necessary for the error estimator can be obtained exactly, in a non-intrusive way, as the solutions of least-squares problems.
arXiv Detail & Related papers (2020-05-12T16:08:05Z)
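For the "Low-rank finetuning for LLMs" entry above, a minimal sketch of the low-rank fine-tuning pattern (as in LoRA): the pre-trained weight is frozen and only a low-rank correction A @ B is trained. The shapes and names are illustrative assumptions; the paper studies the statistical side effects of this restriction, not this code.

```python
# Minimal sketch of a LoRA-style low-rank update; all dimensions are assumed.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 32, 4

W0 = rng.standard_normal((d_out, d_in))         # frozen pre-trained weight
A = rng.standard_normal((d_out, rank)) * 0.01   # trainable factor
B = np.zeros((rank, d_in))                      # trainable factor, zero-init

def forward(x):
    # Effective weight is W0 + A @ B; only A and B are updated, so the
    # fine-tuned model can only move within a rank-`rank` subspace, which is
    # why it may fail to shift away from pre-trained behaviors.
    return (W0 + A @ B) @ x

x = rng.standard_normal(d_in)
y = forward(x)   # equals W0 @ x at initialization, since B is zero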
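For the Koopman model-reduction entry above, a minimal sketch of delay-coordinate encoding followed by a linear, Koopman-style least-squares fit. The delay depth, toy data, and one-step predictor are illustrative assumptions; the paper's integrated control form adds state estimation and full-state decoding that are not reproduced here.

```python
# Minimal sketch, assuming a DMD-like least-squares fit in delay coordinates.
import numpy as np

def delay_embed(y, d):
    """Stack d consecutive measurements into one delay-coordinate state."""
    return np.vstack([y[:, i:y.shape[1] - d + i + 1] for i in range(d)])

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal((2, 200)), axis=1)  # toy measurement series

Z = delay_embed(y, d=5)            # delay-coordinate states
X, Xp = Z[:, :-1], Z[:, 1:]        # snapshot pairs (z_k, z_{k+1})
K = Xp @ np.linalg.pinv(X)         # linear Koopman-style operator

z_pred = K @ X[:, -1]              # one-step prediction in delay coordinates
```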
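For the "Churn Reduction via Distillation" entry above, a minimal sketch of a distillation objective with the base model as the teacher: a KL term pulls the new model's predictions toward the base model's, which constrains predictive churn. The mixing weight and toy inputs are illustrative assumptions.

```python
# Minimal sketch of distillation against a base-model teacher; lam is assumed.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def churn_distill_loss(student_logits, teacher_logits, labels, lam=0.5):
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    ce = -np.log(p_s[np.arange(len(labels)), labels]).mean()       # fit labels
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()   # match teacher
    return ce + lam * kl

rng = np.random.default_rng(2)
logits_s = rng.standard_normal((8, 3))   # new (student) model outputs
logits_t = rng.standard_normal((8, 3))   # base (teacher) model outputs
labels = rng.integers(0, 3, size=8)
print(churn_distill_loss(logits_s, logits_t, labels))
```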
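For the non-Markovian operator-inference entry above, a minimal sketch of the underlying idea: when only part of the state is observed, the next reduced state is regressed on a window of past reduced states, so the learned model carries memory terms. The lag depth and linear form are illustrative assumptions; the paper derives the structure of these terms.

```python
# Minimal sketch of fitting memory (lag) terms by least squares; lags assumed.
import numpy as np

def fit_non_markovian(Q, lags):
    """Least-squares fit of q_{k+1} ~ sum_j A_j q_{k-j} for j = 0..lags-1."""
    cols = Q.shape[1]
    X = np.vstack([Q[:, lags - 1 - j : cols - 1 - j] for j in range(lags)])
    Y = Q[:, lags:]
    return Y @ np.linalg.pinv(X)   # stacked memory operators [A_0, ..., A_{lags-1}]

rng = np.random.default_rng(3)
Q = np.cumsum(rng.standard_normal((4, 100)), axis=1)  # toy reduced trajectory
A = fit_non_markovian(Q, lags=3)

# One-step prediction using the three most recent reduced states:
window = np.concatenate([Q[:, -1], Q[:, -2], Q[:, -3]])
q_next = A @ window
```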