Probabilistic error estimation for non-intrusive reduced models learned
from data of systems governed by linear parabolic partial differential
equations
- URL: http://arxiv.org/abs/2005.05890v1
- Date: Tue, 12 May 2020 16:08:05 GMT
- Title: Probabilistic error estimation for non-intrusive reduced models learned
from data of systems governed by linear parabolic partial differential
equations
- Authors: Wayne Isaac Tan Uy and Benjamin Peherstorfer
- Abstract summary: This work derives a residual-based a posteriori error estimator for reduced models learned with non-intrusive model reduction.
It is shown that quantities that are necessary for the error estimator can be either obtained exactly as the solutions of least-squares problems in a non-intrusive way or bounded in a probabilistic sense.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work derives a residual-based a posteriori error estimator for reduced
models learned with non-intrusive model reduction from data of high-dimensional
systems governed by linear parabolic partial differential equations with
control inputs. It is shown that quantities that are necessary for the error
estimator can be either obtained exactly as the solutions of least-squares
problems in a non-intrusive way from data such as initial conditions, control
inputs, and high-dimensional solution trajectories or bounded in a
probabilistic sense. The computational procedure follows an offline/online
decomposition. In the offline (training) phase, the high-dimensional system is
judiciously solved in a black-box fashion to generate data and to set up the
error estimator. In the online phase, the estimator is used to bound the error
of the reduced-model predictions for new initial conditions and new control
inputs without recourse to the high-dimensional system. Numerical results
demonstrate the workflow of the proposed approach from data to reduced models
to certified predictions.
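To make the offline/online workflow concrete, the following is a minimal sketch of the kind of least-squares learning and residual tracking the abstract describes, on a toy discretized heat equation. The setup, all names, and the use of the full-order operators to evaluate the residual are illustrative assumptions, not the authors' implementation; the paper's point is precisely that the estimator quantities can be obtained non-intrusively from data or bounded probabilistically.

```python
# Illustrative sketch only (not the authors' code): non-intrusive
# least-squares learning of a reduced model for x_{k+1} = A x_k + B u_k,
# plus the residual norm that drives a residual-based error estimator.
import numpy as np

rng = np.random.default_rng(0)
n, K, r = 100, 400, 10          # full dimension, time steps, reduced dimension

# Full-order model: implicit Euler for a 1D heat equation with one input.
dx, dt = 1.0 / (n + 1), 1e-3
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
M = np.eye(n) - dt * lap
A = np.linalg.inv(M)                            # time-stepping matrix
B = dt * np.linalg.solve(M, np.ones((n, 1)))    # distributed control input

# Offline (training) phase: query the full model in a black-box fashion.
U = rng.standard_normal((1, K))                 # training control inputs
X = np.zeros((n, K + 1))                        # snapshots, x_0 = 0
for k in range(K):
    X[:, k + 1] = A @ X[:, k] + B[:, 0] * U[0, k]

V = np.linalg.svd(X, full_matrices=False)[0][:, :r]   # POD basis
Xr = V.T @ X                                    # projected states

# Learn reduced operators by least squares: Xr[:, 1:] ~ Ar Xr[:, :-1] + Br U.
D = np.vstack([Xr[:, :-1], U]).T                # (K, r + 1) data matrix
Ops = np.linalg.lstsq(D, Xr[:, 1:].T, rcond=None)[0].T
Ar, Br = Ops[:, :r], Ops[:, r:]

# Online phase: predict for a new control input and track the residual norm
# at each step. The residual uses A and B here for illustration only; the
# paper obtains the required quantities non-intrusively from data or bounds
# them in a probabilistic sense.
u_new = np.sin(np.linspace(0.0, 8.0 * np.pi, K))
xr, res = np.zeros(r), []
for k in range(K):
    xr_next = Ar @ xr + Br[:, 0] * u_new[k]
    full_step = A @ (V @ xr) + B[:, 0] * u_new[k]
    res.append(np.linalg.norm(V @ xr_next - full_step))
    xr = xr_next

print(f"max residual norm along the trajectory: {max(res):.3e}")
```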
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Parameter uncertainties for imperfect surrogate models in the low-noise regime [0.3069335774032178]
We analyze the generalization error of misspecified, near-deterministic surrogate models.
We show posterior distributions must cover every training point to avoid a divergent generalization error.
This is demonstrated on model problems before application to thousand-dimensional datasets in atomistic machine learning.
arXiv Detail & Related papers (2024-02-02T11:41:21Z)
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
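As a loose illustration of the delay-coordinate idea (a sketch under assumed names and a toy damped oscillator, not the paper's model structure): stack delayed measurements into a state and fit a linear one-step operator by least squares.

```python
# Hedged sketch of delay-coordinate (Hankel) Koopman-style modeling:
# embed a scalar measurement in delay coordinates and fit a linear
# one-step operator by least squares. Toy system and names are assumed.
import numpy as np

dt, K, d = 0.05, 500, 8                      # step size, samples, delays
t = np.arange(K) * dt
y = np.exp(-0.1 * t) * np.cos(2.0 * t)       # measured output

# Delay-coordinate states z_k = [y_k, y_{k-1}, ..., y_{k-d+1}].
Z = np.column_stack([y[d - 1 - i : K - i] for i in range(d)])

# Least-squares fit of a linear operator: z_{k+1} ~ Kop @ z_k.
Kop = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T

# Roll the linear surrogate forward from the first delay state.
z, pred = Z[0].copy(), [Z[0, 0]]
for _ in range(K - d):
    z = Kop @ z
    pred.append(z[0])

err = np.abs(np.array(pred) - y[d - 1:]).max()
print(f"max prediction error of the linear surrogate: {err:.3e}")
```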
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
- Dimensionality Collapse: Optimal Measurement Selection for Low-Error Infinite-Horizon Forecasting [3.5788754401889022]
We solve sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cramér-Rao lower bound (CRLB) for forecasting as the cost.
By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived.
This alternative formulation is based on the future collapse of dimensionality inherent in the limiting behavior of many differential equations and can be directly observed in the low-rank structure of the CRLB for forecasting.
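A rough sketch of that cost (toy linear-Gaussian setup, all names assumed, not the paper's construction): with scalar measurements y_k = h_k^T F^k x_0 + noise, the Fisher information of x_0 accumulates over measurements, the CRLB for forecasting x_t = F^t x_0 is F^t I^{-1} (F^t)^T, and a design is scored by its time-averaged trace.

```python
# Hedged sketch (assumed toy setup, not the paper's formulation): compare
# two linear measurement designs by the time-averaged trace of the
# Cramer-Rao lower bound (CRLB) for forecasting a linear system.
import numpy as np

F = np.array([[0.99, 0.10],                 # stable, rotation-like dynamics
              [-0.10, 0.99]])
sigma2 = 0.1                                # measurement noise variance

def forecast_crlb_cost(H_rows, horizon=50):
    """Time-averaged trace of the CRLB for forecasting x_t = F^t x_0,
    given scalar measurements y_k = h_k @ (F^k @ x_0) + noise."""
    n = F.shape[0]
    info = np.zeros((n, n))                 # Fisher information of x_0
    Fk = np.eye(n)
    for h in H_rows:
        g = h @ Fk                          # sensitivity of y_k to x_0
        info += np.outer(g, g) / sigma2
        Fk = F @ Fk
    crlb0 = np.linalg.inv(info)             # CRLB for estimating x_0
    cost, Ft = 0.0, np.eye(n)
    for _ in range(horizon):
        Ft = F @ Ft
        cost += np.trace(Ft @ crlb0 @ Ft.T)  # CRLB for forecasting x_t
    return cost / horizon

# Two candidate designs: always measure state 1 vs. alternate states.
design_a = [np.array([1.0, 0.0])] * 10
design_b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])] * 5
print(forecast_crlb_cost(design_a), forecast_crlb_cost(design_b))
```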
arXiv Detail & Related papers (2023-03-27T17:25:04Z)
- LMI-based Data-Driven Robust Model Predictive Control [0.1473281171535445]
We propose a data-driven robust linear matrix inequality-based model predictive control scheme that considers input and state constraints.
The controller stabilizes the closed-loop system and guarantees constraint satisfaction.
arXiv Detail & Related papers (2023-03-08T18:20:06Z)
- GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration [64.8770356696056]
We propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown.
The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning.
arXiv Detail & Related papers (2023-01-30T06:27:48Z)
- Deep Subspace Encoders for Nonlinear System Identification [0.0]
We propose a method which uses a truncated prediction loss and a subspace encoder for state estimation.
We show that, under mild conditions, the proposed method is locally consistent, increases optimization stability, and achieves increased data efficiency.
arXiv Detail & Related papers (2022-10-26T16:04:38Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
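As a generic illustration of the idea only (textbook Bayesian linear regression, not the paper's scalable deep-learning estimator): the log marginal likelihood can rank models without any validation split, here choosing a polynomial degree.

```python
# Hedged sketch of marginal-likelihood model selection: rank polynomial
# degrees by log evidence under Bayesian linear regression, no held-out set.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 40)
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(40)  # noisy training data

def log_evidence(degree, alpha=1.0, noise_var=0.01):
    """Log marginal likelihood of y under a prior w ~ N(0, alpha I) and
    likelihood y ~ N(Phi w, noise_var I); Phi is a polynomial basis."""
    Phi = np.vander(x, degree + 1, increasing=True)   # design matrix
    cov = alpha * Phi @ Phi.T + noise_var * np.eye(len(x))
    logdet = np.linalg.slogdet(cov)[1]
    quad = y @ np.linalg.solve(cov, y)
    return -0.5 * (logdet + quad + len(x) * np.log(2.0 * np.pi))

for d in range(1, 8):
    print(d, round(log_evidence(d), 2))  # evidence peaks near the degree
                                         # that captures sin(2x) well
```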
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Derivative-Based Koopman Operators for Real-Time Control of Robotic Systems [14.211417879279075]
This paper presents a generalizable methodology for data-driven identification of nonlinear dynamics that bounds the model error.
We construct a Koopman operator-based linear representation and utilize Taylor series accuracy analysis to derive an error bound.
When combined with control, the Koopman representation of the nonlinear system has marginally better performance than competing nonlinear modeling methods.
arXiv Detail & Related papers (2020-10-12T15:15:13Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.