Extension of Dynamic Mode Decomposition for dynamic systems with
incomplete information based on t-model of optimal prediction
- URL: http://arxiv.org/abs/2202.11432v1
- Date: Wed, 23 Feb 2022 11:23:59 GMT
- Title: Extension of Dynamic Mode Decomposition for dynamic systems with
incomplete information based on t-model of optimal prediction
- Authors: Aleksandr Katrutsa, Sergey Utyuzhnikov, Ivan Oseledets
- Abstract summary: The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
- Score: 69.81996031777717
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Dynamic Mode Decomposition has proved to be a very efficient technique to
study dynamic data. This is entirely a data-driven approach that extracts all
necessary information from data snapshots which are commonly supposed to be
sampled from measurement. The application of this approach becomes problematic
if the available data is incomplete because some smaller-scale dimensions are
either missing or unmeasured. Such a setting occurs very often in modeling
complex dynamical systems such as power grids, in particular with reduced-order
modeling. To take into account the effect of unresolved variables, the optimal
prediction approach based on the Mori-Zwanzig formalism can be applied to
obtain the most expected prediction under existing uncertainties. This
effectively leads to the development of a time-predictive model accounting for
the impact of missing data. In the present paper we provide a detailed
derivation of the considered method from the Liouville equation and finalize it
with the optimization problem that defines the optimal transition operator
corresponding to the observed data. In contrast to the existing approach, we
consider a first-order approximation of the Mori-Zwanzig decomposition, state
the corresponding optimization problem and solve it with the gradient-based
optimization method. The gradient of the obtained objective function is
computed exactly via automatic differentiation. The
numerical experiments illustrate that the considered approach gives practically
the same dynamics as the exact Mori-Zwanzig decomposition, but is less
computationally intensive.
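As a rough illustration of the kind of optimization described in the abstract, here is a minimal sketch of fitting a DMD-style transition operator with a time-weighted correction term by gradient descent, with the gradient obtained through automatic differentiation. The linear parameterization x_{k+1} ≈ A x_k + t_k B x_k, the helper names (predict, loss, fit), and the choice of JAX with optax are illustrative assumptions only; the operator actually used in the paper is derived from the Liouville equation and the first-order Mori-Zwanzig (t-model) approximation.

```python
# Hypothetical sketch (not the paper's exact parameterization): learn a
# DMD-like operator A plus a t-model-style, time-weighted correction B
# by minimizing the one-step prediction error over snapshots.
import jax
import jax.numpy as jnp
import optax  # gradient-based optimizers for JAX


def predict(params, x, t):
    """One-step prediction of the resolved variables: A x + t * (B x)."""
    A, B = params["A"], params["B"]
    return x @ A.T + t[:, None] * (x @ B.T)


def loss(params, snapshots, times):
    """Mean squared one-step prediction error over consecutive snapshots."""
    x_now, x_next = snapshots[:-1], snapshots[1:]
    return jnp.mean((predict(params, x_now, times[:-1]) - x_next) ** 2)


def fit(snapshots, times, steps=2000, lr=1e-2, seed=0):
    """Fit A and B with Adam; gradients come from automatic differentiation."""
    n = snapshots.shape[1]
    k1, k2 = jax.random.split(jax.random.PRNGKey(seed))
    params = {
        "A": jnp.eye(n) + 0.01 * jax.random.normal(k1, (n, n)),
        "B": 0.01 * jax.random.normal(k2, (n, n)),
    }
    opt = optax.adam(lr)
    opt_state = opt.init(params)
    grad_fn = jax.jit(jax.grad(loss))  # exact gradient of the objective
    for _ in range(steps):
        grads = grad_fn(params, snapshots, times)
        updates, opt_state = opt.update(grads, opt_state)
        params = optax.apply_updates(params, updates)
    return params
```

Under this assumed form, setting B to zero recovers an ordinary one-step DMD fit, so the time-weighted term plays the role of the t-model correction accounting for the unresolved variables.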
Related papers
- Asymptotically Optimal Change Detection for Unnormalized Pre- and Post-Change Distributions [65.38208224389027]
This paper addresses the problem of detecting changes when only unnormalized pre- and post-change distributions are accessible.
Our approach is based on the estimation of the Cumulative Sum statistics, which is known to produce optimal performance.
arXiv Detail & Related papers (2024-10-18T17:13:29Z)
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- REMEDI: Corrective Transformations for Improved Neural Entropy Estimation [0.7488108981865708]
We introduce REMEDI for efficient and accurate estimation of differential entropy.
Our approach demonstrates improvement across a broad spectrum of estimation tasks.
It can be naturally extended to information theoretic supervised learning models.
arXiv Detail & Related papers (2024-02-08T14:47:37Z)
- Data-driven decision-focused surrogate modeling [10.1947610432159]
We introduce the concept of decision-focused surrogate modeling for solving challenging nonlinear optimization problems in real-time settings.
The proposed data-driven framework seeks to learn a simpler, e.g. convex, surrogate optimization model that is trained to minimize the decision prediction error.
We validate our framework through numerical experiments involving the optimization of common nonlinear chemical processes.
arXiv Detail & Related papers (2023-08-23T14:23:26Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HDGM).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (GMSOLAS) penalty function, wherein the weights are obtained from the unpenalised f-HDGM maximum-likelihood estimators.
arXiv Detail & Related papers (2022-08-10T19:17:45Z)
- Low-rank statistical finite elements for scalable model-data synthesis [0.8602553195689513]
statFEM acknowledges a priori model misspecification by embedding forcing within the governing equations.
The method reconstructs the observed data-generating processes with minimal loss of information.
This article overcomes the scalability hurdle by embedding a low-rank approximation of the underlying dense covariance matrix.
arXiv Detail & Related papers (2021-09-10T09:51:43Z)
- SODEN: A Scalable Continuous-Time Survival Model through Ordinary Differential Equation Networks [14.564168076456822]
We propose a flexible model for survival analysis using neural networks along with scalable optimization algorithms.
We demonstrate the effectiveness of the proposed method in comparison to existing state-of-the-art deep learning survival analysis models.
arXiv Detail & Related papers (2020-08-19T19:11:25Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)