Reduced Order Models and Conditional Expectation -- Analysing Parametric Low-Order Approximations
- URL: http://arxiv.org/abs/2412.19836v2
- Date: Thu, 13 Feb 2025 20:12:49 GMT
- Title: Reduced Order Models and Conditional Expectation -- Analysing Parametric Low-Order Approximations
- Authors: Hermann G. Matthies
- Abstract summary: Systems may depend on parameters which one can control, which serve to optimise the system, or which are imposed externally.
In machine learning, similarly, a function from the parameter set into the image space of the machine-learning model is learned on a training set of samples.
This offers the possibility of having a combined look at these methods, and also of introducing more general loss functions.
- Abstract: Systems may depend on parameters which one may control, or which serve to optimise the system, or are imposed externally, or they could be uncertain. This last case is taken as the "Leitmotiv" for the following. A reduced order model is produced from the full order model by some kind of projection onto a relatively low-dimensional manifold or subspace. The parameter-dependent reduction process produces a function from the parameters into the manifold. One now wants to examine the relation between the full and the reduced state for all possible parameter values of interest. Similarly, in the field of machine learning, a function from the parameter set into the image space of the machine learning model is learned on a training set of samples, typically by minimising the mean-square error. This set may be seen as a sample from some probability distribution, and the training is thus an approximate computation of the expectation, giving an approximation to the conditional expectation, a special case of Bayesian updating where the Bayesian loss function is the mean-square error. This offers the possibility of having a combined look at these methods, and also of introducing more general loss functions.
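The link drawn in the abstract between mean-square-error training and the conditional expectation rests on the standard variational characterisation: over all measurable maps $\phi$ of the parameter $\Theta$, the functional $\mathbb{E}\,\|X - \phi(\Theta)\|^2$ is minimised by $\phi(\Theta) = \mathbb{E}[X \mid \Theta]$, so a regressor fitted by least squares on parameter samples approximates that conditional expectation. The sketch below illustrates the idea in the reduced-order-model setting; the POD/SVD projection, the toy full-order solver, and the polynomial regressor are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch (not the paper's method): a projection-based reduced-order
# model built with POD/SVD, plus a least-squares regression from the
# parameter to the reduced coordinates. Minimising the mean-square error
# over such regressors approximates the conditional expectation
# E[ state | parameter ], the link the abstract draws between model
# reduction and machine-learning training.
import numpy as np

rng = np.random.default_rng(0)

# --- toy full-order model: state u(xi) in R^n depends on a scalar parameter xi
n = 200                      # full-order dimension (assumed for illustration)
x = np.linspace(0.0, 1.0, n)

def full_order_state(xi):
    """Toy parametric solution; stands in for an expensive solver."""
    return np.sin(np.pi * x * xi) + 0.1 * xi * x**2

# --- snapshots of the full-order state over training parameters
xis = rng.uniform(0.5, 2.0, size=50)
snapshots = np.stack([full_order_state(xi) for xi in xis], axis=1)  # n x 50

# --- POD basis via truncated SVD ("some kind of projection" in the abstract)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                        # reduced dimension
V = U[:, :r]                 # projection basis, n x r

# reduced coordinates of each snapshot: the reduced-order "state"
q = V.T @ snapshots          # r x 50

# --- learn the parameter -> reduced-state map by least squares (MSE loss);
#     a simple polynomial feature map, purely for illustration
def features(xi):
    return np.array([1.0, xi, xi**2, xi**3])

Phi = np.stack([features(xi) for xi in xis])       # 50 x 4 design matrix
W, *_ = np.linalg.lstsq(Phi, q.T, rcond=None)      # 4 x r coefficients

def rom_predict(xi):
    """Approximate full state for a new parameter value."""
    return V @ (features(xi) @ W)

# --- compare ROM prediction with the full-order model at a test parameter
xi_test = 1.3
err = np.linalg.norm(rom_predict(xi_test) - full_order_state(xi_test))
print(f"relative error at xi={xi_test}: "
      f"{err / np.linalg.norm(full_order_state(xi_test)):.2e}")
```

Replacing the squared loss in the regression step by another Bayesian loss function (e.g. a quantile loss) yields the corresponding conditional statistic instead of the conditional expectation, which is the generalisation the abstract alludes to.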
Related papers
- Automatic Debiased Machine Learning for Smooth Functionals of Nonparametric M-Estimands [34.30497962430375]
We propose a unified framework for automatic debiased machine learning (autoDML) to perform inference on smooth functionals of infinite-dimensional M-estimands.
We introduce three autoDML estimators based on one-step estimation, targeted minimum loss-based estimation, and the method of sieves.
For data-driven model selection, we derive a novel decomposition of model approximation error for smooth functionals of M-estimands.
arXiv Detail & Related papers (2025-01-21T03:50:51Z) - Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Should We Learn Most Likely Functions or Parameters? [51.133793272222874]
We investigate the benefits and drawbacks of directly estimating the most likely function implied by the model and the data.
We find that function-space MAP estimation can lead to flatter minima, better generalization, and improved robustness to overfitting.
arXiv Detail & Related papers (2023-11-27T16:39:55Z) - Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Optimizing model-agnostic Random Subspace ensembles [5.680512932725364]
We present a model-agnostic ensemble approach for supervised learning.
The proposed approach alternates between learning an ensemble of models, using a parametric version of the Random Subspace approach, and optimising the parameters of that feature-sampling scheme.
We show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets.
arXiv Detail & Related papers (2021-09-07T13:58:23Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - On Misspecification in Prediction Problems and Robustness via Improper Learning [23.64462813525688]
We show that for a broad class of loss functions and parametric families of distributions, the regret of playing a "proper" predictor has a lower bound scaling at least as $\sqrt{\gamma n}$.
We exhibit instances in which this is unimprovable even over the family of all learners that may play distributions in the convex hull of the parametric family.
arXiv Detail & Related papers (2021-01-13T17:54:08Z) - A new method for parameter estimation in probabilistic models: Minimum probability flow [26.25482738732648]
We propose a new parameter fitting method, Minimum Probability Flow (MPF), which is applicable to any parametric model.
We demonstrate parameter estimation using MPF in two cases: a continuous state space model, and an Ising spin glass.
arXiv Detail & Related papers (2020-07-17T21:19:44Z) - Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)