Multifidelity Cross-validation
- URL: http://arxiv.org/abs/2407.01495v1
- Date: Mon, 1 Jul 2024 17:42:11 GMT
- Title: Multifidelity Cross-validation
- Authors: S. Ashwin Renganathan, Kade Carlson
- Abstract summary: We are interested in emulating quantities of interest observed from models of a system at multiple fidelities.
We propose a novel method to actively learn the surrogate model via leave-one-out cross-validation (LOO-CV).
We demonstrate the utility of our method on several synthetic test problems as well as on the thermal stress analysis of a gas turbine blade.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emulating the mapping between quantities of interest and their control parameters using surrogate models finds widespread application in engineering design, including in numerical optimization and uncertainty quantification. Gaussian process models can serve as probabilistic surrogates of unknown functions, making them highly suitable for engineering design and decision-making in the presence of uncertainty. In this work, we are interested in emulating quantities of interest observed from models of a system at multiple fidelities, which trade accuracy for computational efficiency. Using multifidelity Gaussian process models to efficiently fuse models at multiple fidelities, we propose a novel method to actively learn the surrogate model via leave-one-out cross-validation (LOO-CV). Our proposed multifidelity cross-validation (MFCV) approach adaptively reduces the LOO-CV error at the target (highest) fidelity by learning the correlations between the LOO-CV errors at all fidelities. MFCV develops a two-step lookahead policy to select optimal input-fidelity pairs, both in sequence and in batches, for both continuous and discrete fidelity spaces. We demonstrate the utility of our method on several synthetic test problems as well as on the thermal stress analysis of a gas turbine blade.
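As a point of reference for the LOO-CV error the abstract refers to, the sketch below computes it for a single-fidelity Gaussian process surrogate on a toy function using scikit-learn. It only illustrates the quantity MFCV seeks to reduce at the target fidelity, not the paper's multifidelity acquisition or its two-step lookahead policy; the test function, kernel, and design points are arbitrary assumptions.

```python
# Minimal single-fidelity illustration of the LOO-CV error a GP surrogate
# incurs on its training set. This is NOT the paper's MFCV acquisition;
# it only shows the quantity that MFCV aims to reduce at the target fidelity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 1))                      # arbitrary design points
y = np.sin(6.0 * X[:, 0]) + 0.05 * rng.standard_normal(20)   # toy "high-fidelity" output

kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X[train_idx], y[train_idx])
    mean = gp.predict(X[test_idx])                # predict the single held-out point
    errors.append((mean[0] - y[test_idx][0]) ** 2)

loo_cv_error = float(np.mean(errors))             # the error MFCV drives down
print(f"LOO-CV mean squared error: {loo_cv_error:.4e}")
```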
Related papers
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
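As context for the summary above, the sketch below shows the generic coefficient-weighted parameter merge that MAP's scaling coefficients plug into; it is not MAP itself (no amortized Pareto front or quadratic approximation), and the toy parameter dictionaries are assumptions for illustration.

```python
# Generic coefficient-weighted merge of several models' parameters.
# MAP's contribution is how the scaling coefficients are chosen (via an
# amortized Pareto front); this sketch only shows the merge operation itself.
import numpy as np

def merge_models(param_dicts, coefficients):
    """Return a parameter dict that is the weighted sum of the inputs."""
    assert len(param_dicts) == len(coefficients)
    merged = {}
    for name in param_dicts[0]:
        merged[name] = sum(c * params[name]
                           for c, params in zip(coefficients, param_dicts))
    return merged

# Two toy "task-specific" models with identical architectures.
model_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
model_b = {"w": np.array([3.0, 0.0]), "b": np.array([-0.5])}

merged = merge_models([model_a, model_b], coefficients=[0.7, 0.3])
print(merged)   # {'w': array([1.6, 1.4]), 'b': array([0.2])}
```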
arXiv Detail & Related papers (2024-06-11T17:55:25Z) - Diffusion-Generative Multi-Fidelity Learning for Physical Simulation [24.723536390322582]
We develop a diffusion-generative multi-fidelity learning method based on stochastic differential equations (SDEs), where the generation is a continuous denoising process.
By conditioning on additional inputs (temporal or spatial variables), our model can efficiently learn and predict multi-dimensional solution arrays.
arXiv Detail & Related papers (2023-11-09T18:59:05Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
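To make the fixed-point formulation above concrete, here is a minimal sketch of an iterative deconvolution map run to a fixed point, with soft-thresholding standing in for the paper's learnable neural regularizer and a known 1-D blur kernel assumed; it is not the Deep Equilibrium model from the paper.

```python
# Fixed-point iteration x_{k+1} = f(x_k; y) for a toy 1-D deconvolution,
# with soft-thresholding standing in for the learnable regularizer.
# This illustrates the fixed-point formulation only, not the DEQ model itself.
import numpy as np

def blur(x, kernel):
    return np.convolve(x, kernel, mode="same")

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fixed_point_deconv(y, kernel, step=0.5, lam=0.01, tol=1e-6, max_iter=500):
    x = y.copy()
    for k in range(max_iter):
        grad = blur(blur(x, kernel) - y, kernel[::-1])   # gradient of 0.5*||Kx - y||^2
        x_next = soft_threshold(x - step * grad, lam)    # proximal-gradient style map f
        if np.linalg.norm(x_next - x) < tol * (np.linalg.norm(x) + 1e-12):
            return x_next, k + 1
        x = x_next
    return x, max_iter

kernel = np.array([0.25, 0.5, 0.25])                 # known blur kernel (assumption)
truth = np.zeros(64); truth[20] = 1.0; truth[40] = -0.5
y = blur(truth, kernel)
x_hat, iters = fixed_point_deconv(y, kernel)
print(f"converged in {iters} iterations")
```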
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Disentangled Multi-Fidelity Deep Bayesian Active Learning [19.031567953748453]
Multi-fidelity active learning aims to learn a direct mapping from input parameters to simulation outputs at the highest fidelity.
Deep learning-based methods often impose a hierarchical structure in hidden representations, which only supports passing information from low-fidelity to high-fidelity.
We propose a novel framework called Disentangled Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns the surrogate models conditioned on the distribution of functions at multiple fidelities.
arXiv Detail & Related papers (2023-05-07T23:14:58Z) - Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Adaptive Reliability Analysis for Multi-fidelity Models using a Collective Learning Strategy [6.368679897630892]
This study presents a new approach called adaptive multi-fidelity Gaussian process for reliability analysis (AMGPRA).
It is shown that the proposed method achieves similar or higher accuracy with reduced computational costs compared to state-of-the-art single and multi-fidelity methods.
A key application of AMGPRA is high-fidelity fragility modeling using complex and costly physics-based computational models.
arXiv Detail & Related papers (2021-09-21T14:42:58Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Active Learning with Multifidelity Modeling for Efficient Rare Event Simulation [0.0]
We propose a framework for active learning with multifidelity modeling emphasizing the efficient estimation of rare events.
Our framework works by fusing a low-fidelity (LF) prediction with a correction inferred from the high-fidelity (HF) model, filtering the corrected LF prediction to decide whether to call the high-fidelity model.
For improved robustness when estimating smaller failure probabilities, we propose using dynamic active learning functions that decide when to call the HF model.
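A minimal sketch of the fuse-and-filter idea described above: a low-fidelity prediction is corrected using nearby high-fidelity evaluations, and a simple distance threshold (a stand-in for the paper's dynamic active learning functions) decides when to call the high-fidelity model. All function names and the toy models are assumptions for illustration.

```python
# Sketch of an LF-plus-correction prediction with a filter that decides
# whether to fall back to the expensive HF model. The threshold rule is a
# simple stand-in for the paper's dynamic active learning functions.
import numpy as np

def low_fidelity(x):
    return np.sin(x)                         # cheap, biased toy model

def high_fidelity(x):
    return np.sin(x) + 0.3 * np.cos(5 * x)   # expensive toy reference model

class CorrectedPredictor:
    def __init__(self, x_hf, threshold=0.3):
        self.x_hf = np.asarray(x_hf)                             # points where HF was run
        self.delta = high_fidelity(self.x_hf) - low_fidelity(self.x_hf)
        self.threshold = threshold

    def predict(self, x):
        # Nearest-neighbour correction; distance acts as a crude uncertainty proxy.
        dist = np.abs(self.x_hf - x)
        j = int(np.argmin(dist))
        corrected = low_fidelity(x) + self.delta[j]
        if dist[j] > self.threshold:          # too far from any HF sample: call HF
            return high_fidelity(x), True
        return corrected, False

predictor = CorrectedPredictor(x_hf=np.linspace(0, np.pi, 8))
value, called_hf = predictor.predict(1.3)
print(value, "HF called" if called_hf else "corrected LF used")
```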
arXiv Detail & Related papers (2021-06-25T17:44:28Z) - Model Selection for Bayesian Autoencoders [25.619565817793422]
We propose to optimize the distributional sliced-Wasserstein distance between the output of the autoencoder and the empirical data distribution.
We turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space.
We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results.
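For reference, the sketch below estimates a plain (uniformly) sliced 1-Wasserstein distance between two sample sets; the distributional reweighting of slice directions used in the paper is omitted, and the sample arrays are toy assumptions.

```python
# Monte Carlo estimate of the sliced 1-Wasserstein distance between two
# equally sized empirical samples: project onto random unit directions,
# then compare sorted 1-D projections. (The paper's *distributional*
# slicing, which reweights directions, is omitted here.)
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, rng=None):
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    dirs = rng.standard_normal((n_projections, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    total = 0.0
    for u in dirs:
        px, py = np.sort(x @ u), np.sort(y @ u)           # 1-D projections
        total += np.mean(np.abs(px - py))                 # 1-D W1 via sorted samples
    return total / n_projections

x = np.random.default_rng(0).standard_normal((256, 4))
y = np.random.default_rng(1).standard_normal((256, 4)) + 0.5
print(sliced_wasserstein(x, y))
```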
arXiv Detail & Related papers (2021-06-11T08:55:00Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
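In standard notation (assuming an arbitrary variable ordering x_1, ..., x_D), the decomposition the summary refers to is the autoregressive factorization together with the per-dimension conditional scores that AR-CSM parameterizes:

```latex
% Autoregressive factorization and per-dimension conditional scores
\log p(x_1,\dots,x_D) \;=\; \sum_{d=1}^{D} \log p(x_d \mid x_{<d}),
\qquad
s_d(x_{\le d}) \;=\; \frac{\partial}{\partial x_d}\,\log p(x_d \mid x_{<d}).
```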
arXiv Detail & Related papers (2020-10-24T07:01:24Z)