Operator inference of non-Markovian terms for learning reduced models from partially observed state trajectories
- URL: http://arxiv.org/abs/2103.01362v1
- Date: Mon, 1 Mar 2021 23:55:52 GMT
- Title: Operator inference of non-Markovian terms for learning reduced models from partially observed state trajectories
- Authors: Wayne Isaac Tan Uy, Benjamin Peherstorfer
- Abstract summary: This work introduces a non-intrusive model reduction approach for learning reduced models from trajectories of high-dimensional dynamical systems.
The proposed approach compensates for the loss of information due to the partially observed states by constructing non-Markovian reduced models.
Numerical results demonstrate that the proposed approach leads to non-Markovian reduced models that are predictive far beyond the training regime.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces a non-intrusive model reduction approach for learning
reduced models from partially observed state trajectories of high-dimensional
dynamical systems. The proposed approach compensates for the loss of
information due to the partially observed states by constructing non-Markovian
reduced models that make future-state predictions based on a history of reduced
states, in contrast to traditional Markovian reduced models that rely on the
current reduced state alone to predict the next state. The core contributions
of this work are a data sampling scheme to sample partially observed states
from high-dimensional dynamical systems and a formulation of a regression
problem to fit the non-Markovian reduced terms to the sampled states. Under
certain conditions, the proposed approach recovers from data the very same
non-Markovian terms that one obtains with intrusive methods that require the
governing equations and discrete operators of the high-dimensional dynamical
system. Numerical results demonstrate that the proposed approach leads to
non-Markovian reduced models that are predictive far beyond the training
regime. Additionally, in the numerical experiments, the proposed approach
learns non-Markovian reduced models from trajectories with only 20% observed
state components that are about as accurate as traditional Markovian reduced
models fitted to trajectories with 99% observed components.
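To make the regression formulation concrete, below is a minimal sketch of fitting a linear non-Markovian reduced model of the form q_{k+1} ~ A q_k + sum_{j=1..L} B_j q_{k-j} by least squares. This is an illustration under simplifying assumptions (linear terms only, a single trajectory, the reduced-state trajectory already assembled from the sampled observations), not the authors' implementation; the function names, the memory depth L, and the data layout are placeholders.

```python
import numpy as np

def fit_non_markovian_model(Q, L):
    """Fit q_{k+1} ~ A q_k + sum_{j=1..L} B_j q_{k-j} by least squares.

    Q : (r, K) array of reduced states along one trajectory.
    L : memory depth (number of non-Markovian history terms).
    Returns A (r, r) and a list of L matrices B_j (r, r).
    """
    r, K = Q.shape
    # Each column of D stacks the current reduced state and its L predecessors.
    D = np.vstack([Q[:, L - j:K - 1 - j] for j in range(L + 1)])  # ((L+1)r, K-1-L)
    R = Q[:, L + 1:K]                                             # targets q_{k+1}
    # Solve min ||O D - R||_F for the stacked operator O = [A, B_1, ..., B_L].
    O = np.linalg.lstsq(D.T, R.T, rcond=None)[0].T
    A = O[:, :r]
    Bs = [O[:, (j + 1) * r:(j + 2) * r] for j in range(L)]
    return A, Bs

def predict(A, Bs, history, n_steps):
    """Roll the learned model forward from an initial history of >= L+1 reduced states."""
    traj = list(history)
    for _ in range(n_steps):
        q_next = A @ traj[-1]
        for j, B in enumerate(Bs, start=1):
            q_next += B @ traj[-1 - j]  # non-Markovian memory terms
        traj.append(q_next)
    return np.array(traj)
```

For instance, with a reduced trajectory Q of shape (r, K), calling fit_non_markovian_model(Q, L=4) learns the operators, and predict rolls the model forward from the last L+1 training states; in contrast, a traditional Markovian fit corresponds to L=0.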
Related papers
- Constructing Concept-based Models to Mitigate Spurious Correlations with Minimal Human Effort [31.992947353231564]
Concept Bottleneck Models (CBMs) can provide a principled way of disclosing and guiding model behaviors through human-understandable concepts.
We propose a novel framework designed to exploit pre-trained models while remaining immune to the biases they may carry, thereby reducing vulnerability to spurious correlations.
We evaluate the proposed method on multiple datasets, and the results demonstrate its effectiveness in reducing model reliance on spurious correlations while preserving its interpretability.
arXiv Detail & Related papers (2024-07-12T03:07:28Z)
- Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
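For context, "low-rank fine-tuning" typically refers to LoRA-style adapters, in which a frozen weight W is adapted as W + (alpha/r) A B with trainable low-rank factors. The sketch below is illustrative, not code from the paper; the class name, rank r, and scaling alpha are hypothetical defaults.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-style adapter: y = x @ (W + (alpha / r) * A @ B).

    W (d_in, d_out) stays frozen; only the low-rank factors A (d_in, r)
    and B (r, d_out) would be trained. Names and defaults are illustrative.
    """
    def __init__(self, W, r=8, alpha=16.0, seed=None):
        rng = np.random.default_rng(seed)
        self.W = W                                       # frozen pre-trained weight
        self.A = rng.normal(0.0, 0.02, size=(W.shape[0], r))
        self.B = np.zeros((r, W.shape[1]))               # zero init: adapter starts as a no-op
        self.scale = alpha / r

    def __call__(self, x):
        return x @ self.W + self.scale * (x @ self.A) @ self.B
```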
arXiv Detail & Related papers (2024-05-28T20:43:53Z)
- Model order reduction of deep structured state-space models: A system-theoretic approach [0.0]
Deep structured state-space models offer high predictive performance.
The learned representations often suffer from excessively large model orders, which render them unsuitable for control design purposes.
We introduce two regularization terms which can be incorporated into the training loss for improved model order reduction.
The presented regularizers yield more parsimonious representations and faster inference from the resulting reduced-order models.
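The paper's specific regularizers are not reproduced here; as a generic illustration of the pattern the summary describes, the sketch below adds two placeholder penalties to a fit loss so that weakly contributing modes can be truncated after training. All names and weights are hypothetical.

```python
import numpy as np

def regularized_loss(y_pred, y_true, modal_amps, lam1=1e-3, lam2=1e-4):
    """Fit term plus two placeholder penalties encouraging a low model order.

    modal_amps: per-mode amplitude proxies of the learned state-space model.
    An L1 term pushes weak modes toward zero (so they can be truncated) and
    an L2 term tempers the surviving ones; both stand in for the paper's
    regularizers, which are not reproduced here.
    """
    fit = float(np.mean((y_pred - y_true) ** 2))
    l1 = lam1 * float(np.sum(np.abs(modal_amps)))
    l2 = lam2 * float(np.sum(modal_amps ** 2))
    return fit + l1 + l2
```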
arXiv Detail & Related papers (2024-03-21T21:05:59Z)
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
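As a rough illustration of delay-coordinate encoding with a linear (Koopman-style) map, consider the DMD-like sketch below. It is not the paper's model structure, which also integrates control inputs and full-state decoding; the function names and window size d are placeholders.

```python
import numpy as np

def delay_embed(y, d):
    """Stack d consecutive measurements y_k, ..., y_{k-d+1} into one column.

    y : (m, K) measurement trajectory; returns Z of shape (m*d, K-d+1).
    """
    m, K = y.shape
    return np.vstack([y[:, d - 1 - j:K - j] for j in range(d)])

def fit_linear_map(Z):
    """Least-squares fit of z_{k+1} ~ A z_k on delay-embedded data (DMD-style)."""
    X, Y = Z[:, :-1], Z[:, 1:]
    return Y @ np.linalg.pinv(X)
```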
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
- Principled Pruning of Bayesian Neural Networks through Variational Free Energy Minimization [2.3999111269325266]
We formulate and apply Bayesian model reduction to perform principled pruning of Bayesian neural networks.
A novel iterative pruning algorithm is presented to alleviate the problems arising with naive Bayesian model reduction.
Our experiments indicate better model performance in comparison to state-of-the-art pruning schemes.
arXiv Detail & Related papers (2022-10-17T14:34:42Z)
- Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
arXiv Detail & Related papers (2021-11-02T12:33:42Z)
- Active operator inference for learning low-dimensional dynamical-system models from noisy data [0.0]
Noise poses a challenge for learning dynamical-system models because even small variations can distort the dynamics described by trajectory data.
This work builds on operator inference from scientific machine learning to infer low-dimensional models from high-dimensional state trajectories polluted with noise.
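One common, generic way to temper noise in such least-squares operator fits is Tikhonov regularization, sketched below. This is not necessarily the scheme of the paper, which builds on operator inference with its own sampling strategy; the sketch only illustrates the regularized least-squares pattern, and the names are placeholders.

```python
import numpy as np

def ridge_operator_inference(D, R, lam=1e-6):
    """Solve min ||O D - R||_F^2 + lam ||O||_F^2 for the operator O.

    D : (p, n) regressor matrix (e.g., stacked reduced states).
    R : (r, n) targets (e.g., next reduced states or time derivatives).
    The normal-equations solution is O = R D^T (D D^T + lam I)^{-1}.
    """
    G = D @ D.T + lam * np.eye(D.shape[0])
    return np.linalg.solve(G, D @ R.T).T  # G is symmetric, so solve for O^T
```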
arXiv Detail & Related papers (2021-07-20T04:30:07Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models on inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Progressive residual learning for single image dehazing [57.651704852274825]
A progressive residual learning strategy has been proposed to combine the physical model-free dehazing process with reformulated scattering model-based dehazing operations.
The proposed method performs favorably against the state-of-the-art methods on public dehazing benchmarks with better model interpretability and adaptivity for complex data.
arXiv Detail & Related papers (2021-03-14T16:54:44Z) - On the model-based stochastic value gradient for continuous
reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
This work derives a residual-based a posteriori error estimator for reduced models learned with non-intrusive model reduction.
It is shown that the quantities necessary for the error estimator can be obtained exactly as the solutions of least-squares problems in a non-intrusive way.
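For intuition, a standard residual-based argument of this kind (generic, not the paper's exact estimator): writing the full discrete model as x_{k+1} = A x_k + B u_k and letting r_k denote the residual of the reconstructed reduced solution \tilde{x}_k in the full equations, the error e_k = x_k - \tilde{x}_k satisfies a recursion whose norm can be bounded from computable quantities:

```latex
e_{k+1} = A e_k + r_k
\quad\Longrightarrow\quad
\|e_K\| \le \|A\|^{K}\,\|e_0\| + \sum_{k=0}^{K-1} \|A\|^{K-1-k}\,\|r_k\| .
```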
arXiv Detail & Related papers (2020-05-12T16:08:05Z)