Context-aware surrogate modeling for balancing approximation and
sampling costs in multi-fidelity importance sampling and Bayesian inverse
problems
- URL: http://arxiv.org/abs/2010.11708v2
- Date: Sun, 12 Sep 2021 16:44:46 GMT
- Title: Context-aware surrogate modeling for balancing approximation and
sampling costs in multi-fidelity importance sampling and Bayesian inverse
problems
- Authors: Terrence Alsup and Benjamin Peherstorfer
- Abstract summary: Multi-fidelity methods leverage low-cost surrogate models to speed up computations.
Because surrogate and high-fidelity models are used together, poor predictions by surrogate models can be compensated for by more frequent recourse to high-fidelity models.
This work considers multi-fidelity importance sampling and trades off, both theoretically and computationally, the fidelity of the surrogate model used to construct the biasing density against the number of high-fidelity samples needed to compensate for a poor biasing density.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-fidelity methods leverage low-cost surrogate models to speed up
computations and make occasional recourse to expensive high-fidelity models to
establish accuracy guarantees. Because surrogate and high-fidelity models are
used together, poor predictions by surrogate models can be compensated with
frequent recourse to high-fidelity models. Thus, there is a trade-off between
investing computational resources to improve the accuracy of surrogate models
versus simply making more frequent recourse to expensive high-fidelity models;
however, this trade-off is ignored by traditional modeling methods that
construct surrogate models that are meant to replace high-fidelity models
rather than being used together with high-fidelity models. This work considers
multi-fidelity importance sampling and theoretically and computationally trades
off increasing the fidelity of surrogate models for constructing more accurate
biasing densities and the numbers of samples that are required from the
high-fidelity models to compensate poor biasing densities. Numerical examples
demonstrate that such context-aware surrogate models for multi-fidelity
importance sampling have lower fidelity than what typically is set as tolerance
in traditional model reduction, leading to runtime speedups of up to one order
of magnitude in the presented examples.
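The trade-off the abstract describes can be illustrated with a minimal multi-fidelity importance sampling sketch: many cheap surrogate evaluations locate the rare-event region and fit a biasing density, while a small number of reweighted high-fidelity evaluations keeps the final estimate unbiased even when the surrogate (and hence the biasing density) is crude. The models, threshold, and sample counts below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (not from the paper): estimate the small
# probability p = P[G(x) > t] for x ~ N(0, 1).
def high_fidelity(x):            # stand-in for the expensive model
    return np.sin(3.0 * x) + x

def surrogate(x):                # cheap, slightly biased approximation
    return np.sin(3.0 * x) + x + 0.1 * np.cos(5.0 * x)

t = 2.0

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Step 1: many cheap surrogate evaluations locate the failure region;
# a Gaussian fitted to the surrogate's failure samples becomes one
# component of a defensive biasing mixture q.
x_pilot = rng.standard_normal(100_000)
fail = x_pilot[surrogate(x_pilot) > t]
mu, sigma = fail.mean(), fail.std()

# Step 2: a few high-fidelity evaluations, reweighted by the density
# ratio p/q, keep the estimator unbiased regardless of surrogate quality.
n = 2_000
pick = rng.random(n) < 0.5                     # defensive 50/50 mixture
z = np.where(pick, rng.normal(mu, sigma, n), rng.standard_normal(n))
q = 0.5 * normal_pdf(z) + 0.5 * normal_pdf(z, mu, sigma)
w = normal_pdf(z) / q                          # importance weights (<= 2)
p_hat = np.mean((high_fidelity(z) > t) * w)
```

A lower-fidelity surrogate shifts the fitted biasing density away from the true failure region, which the reweighted high-fidelity samples then compensate for at the price of higher estimator variance; that is the balance the paper optimizes.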
Related papers
- Practical multi-fidelity machine learning: fusion of deterministic and Bayesian models [0.34592277400656235]
Multi-fidelity machine learning methods integrate scarce, resource-intensive high-fidelity data with abundant but less accurate low-fidelity data.
We propose a practical multi-fidelity strategy for problems spanning low- and high-dimensional domains.
arXiv Detail & Related papers (2024-07-21T10:40:50Z)
- Provable Statistical Rates for Consistency Diffusion Models [87.28777947976573]
Despite the state-of-the-art performance, diffusion models are known for their slow sample generation due to the extensive number of steps involved.
This paper contributes towards the first statistical theory for consistency models, formulating their training as a distribution discrepancy minimization problem.
arXiv Detail & Related papers (2024-06-23T20:34:18Z)
- Uncertainty-aware multi-fidelity surrogate modeling with noisy data [0.0]
In real-world applications, uncertainty is present in both high- and low-fidelity models due to measurement or numerical noise.
This paper introduces a comprehensive framework for multi-fidelity surrogate modeling that handles noise-contaminated data.
The proposed framework offers a natural approach to combining physical experiments and computational models.
arXiv Detail & Related papers (2024-01-12T08:37:41Z)
- General multi-fidelity surrogate models: Framework and active learning strategies for efficient rare event simulation [1.708673732699217]
Estimating the probability of failure for complex real-world systems is often prohibitively expensive.
This paper presents a robust multi-fidelity surrogate modeling strategy.
It is shown to be highly accurate while drastically reducing the number of high-fidelity model calls.
arXiv Detail & Related papers (2022-12-07T00:03:21Z)
- Context-aware learning of hierarchies of low-fidelity models for multi-fidelity uncertainty quantification [0.0]
Multi-fidelity Monte Carlo methods leverage low-fidelity and surrogate models for variance reduction to make tractable uncertainty quantification.
This work proposes a context-aware multi-fidelity Monte Carlo method that optimally balances the costs of training low-fidelity models with the costs of Monte Carlo sampling.
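The control-variate structure behind multi-fidelity Monte Carlo variance reduction can be sketched as follows; the two models, the 1-D setup, and the budget split are hypothetical stand-ins, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pair of models: estimate E[f_hi(x)] for x ~ N(0, 1).
f_hi = lambda x: np.exp(0.1 * x) * np.sin(x + 0.5)   # "expensive" model
f_lo = lambda x: np.sin(x + 0.5)                      # cheap, highly correlated

n_hi, n_lo = 200, 20_000          # budget split: few expensive, many cheap calls
x = rng.standard_normal(n_lo)     # the first n_hi samples are shared by both models

y_hi = f_hi(x[:n_hi])
y_lo_hi = f_lo(x[:n_hi])          # low-fidelity model on the shared samples
y_lo = f_lo(x)

# Control-variate coefficient alpha = cov(f_hi, f_lo) / var(f_lo),
# estimated from the shared samples.
alpha = np.cov(y_hi, y_lo_hi)[0, 1] / np.var(y_lo_hi, ddof=1)

# Low-fidelity samples correct the small high-fidelity average:
mfmc = y_hi.mean() + alpha * (y_lo.mean() - y_lo_hi.mean())
```

The context-aware question the entry above refers to is how much budget to spend making `f_lo` a better model (training cost) versus spending it on more samples of either model (sampling cost).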
arXiv Detail & Related papers (2022-11-20T01:12:51Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate there is no single model that works best for all the cases.
By choosing an appropriate bias model, we can obtain a better robustness result than baselines with a more sophisticated model design.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- Multi-fidelity regression using artificial neural networks: efficient approximation of parameter-dependent output quantities [0.17499351967216337]
We present the use of artificial neural networks applied to multi-fidelity regression problems.
The introduced models are compared against a traditional multi-fidelity scheme, co-kriging.
We also show an application of multi-fidelity regression to an engineering problem.
arXiv Detail & Related papers (2021-02-26T11:29:00Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- When Ensembling Smaller Models is More Efficient than Single Large Models [52.38997176317532]
We show that ensembles can outperform single models with both higher accuracy and requiring fewer total FLOPs to compute.
This presents an interesting observation that output diversity in ensembling can often be more efficient than training larger models.
arXiv Detail & Related papers (2020-05-01T18:56:18Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.