Bifidelity Karhunen-Loève Expansion Surrogate with Active Learning for Random Fields
- URL: http://arxiv.org/abs/2511.03756v1
- Date: Wed, 05 Nov 2025 04:14:44 GMT
- Title: Bifidelity Karhunen-Loève Expansion Surrogate with Active Learning for Random Fields
- Authors: Aniket Jivani, Cosmin Safta, Beckett Y. Zhou, Xun Huan
- Abstract summary: We present a bifidelity Karhunen-Loève expansion (KLE) surrogate model for field-valued quantities of interest (QoIs) under uncertain inputs. We form an active learning strategy that adaptively selects new HF evaluations based on the surrogate's generalization error. New HF samples are then acquired by maximizing an expected improvement criterion, targeting regions of high surrogate error.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a bifidelity Karhunen-Loève expansion (KLE) surrogate model for field-valued quantities of interest (QoIs) under uncertain inputs. The approach combines the spectral efficiency of the KLE with polynomial chaos expansions (PCEs) to preserve an explicit mapping between input uncertainties and output fields. By coupling inexpensive low-fidelity (LF) simulations that capture dominant response trends with a limited number of high-fidelity (HF) simulations that correct for systematic bias, the proposed method enables accurate and computationally affordable surrogate construction. To further improve surrogate accuracy, we form an active learning strategy that adaptively selects new HF evaluations based on the surrogate's generalization error, estimated via cross-validation and modeled using Gaussian process regression. New HF samples are then acquired by maximizing an expected improvement criterion, targeting regions of high surrogate error. The resulting BF-KLE-AL framework is demonstrated on three examples of increasing complexity: a one-dimensional analytical benchmark, a two-dimensional convection-diffusion system, and a three-dimensional turbulent round jet simulation based on Reynolds-averaged Navier--Stokes (RANS) and enhanced delayed detached-eddy simulations (EDDES). Across these cases, the method achieves consistent improvements in predictive accuracy and sample efficiency relative to single-fidelity and random-sampling approaches.
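As a rough illustration of the acquisition step described in the abstract (cross-validated surrogate errors at existing HF inputs modeled with a Gaussian process, and the next HF sample chosen by maximizing expected improvement over the largest observed error), the following NumPy-only sketch uses toy 1D data. The kernel, length scale, stand-in error function, and all variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from math import erf

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential kernel between two sets of points.
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    # Standard GP regression posterior mean and standard deviation.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(Kss) - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for *maximizing* predicted surrogate error: improvement over
    # the largest cross-validation error observed so far.
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * Phi + sigma * phi

rng = np.random.default_rng(0)
X_hf = rng.uniform(0, 1, size=(8, 1))          # existing HF input samples
cv_err = np.sin(6 * X_hf[:, 0]) ** 2 + 0.05    # stand-in for LOO-CV errors
X_cand = np.linspace(0, 1, 200)[:, None]       # candidate pool

mu, sigma = gp_posterior(X_hf, cv_err, X_cand)
ei = expected_improvement(mu, sigma, cv_err.max())
x_next = X_cand[np.argmax(ei)]                 # next HF evaluation point
```

In the paper's setting the selected `x_next` would trigger a new HF simulation, after which the bifidelity surrogate and the GP error model are refit; this sketch only shows a single acquisition.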
Related papers
- LFPO: Likelihood-Free Policy Optimization for Masked Diffusion Models [48.68246945083386]
Likelihood-Free Policy Optimization (LFPO) is a native framework that maps the concept of vector field flow matching to the discrete token space. LFPO formulates alignment as geometric velocity rectification, which directly optimizes denoising logits via contrastive updates. Experiments demonstrate that LFPO not only outperforms state-of-the-art baselines on code and reasoning benchmarks but also accelerates inference by approximately 20% through reduced diffusion steps.
arXiv Detail & Related papers (2026-03-02T07:42:55Z) - Sharp Convergence Rates for Masked Diffusion Models [53.117058231393834]
We develop a total-variation based analysis for the Euler method that overcomes existing limitations. Our results relax assumptions on score estimation, improve parameter dependencies, and establish convergence guarantees. Overall, our analysis introduces a direct TV-based error decomposition along the CTMC trajectory and a decoupling-based path-wise analysis for FHS.
arXiv Detail & Related papers (2026-02-26T00:47:51Z) - Function-Space Decoupled Diffusion for Forward and Inverse Modeling in Carbon Capture and Storage [65.51149575007149]
We present Fun-DDPS, a generative framework that combines function-space diffusion models with differentiable neural operator surrogates for both forward and inverse modeling. Fun-DDPS produces physically consistent realizations free from the high-frequency artifacts observed in joint-state baselines.
arXiv Detail & Related papers (2026-02-12T18:58:12Z) - G$^2$RPO: Granular GRPO for Precise Reward in Flow Models [74.21206048155669]
We propose a novel Granular-GRPO (G$^2$RPO) framework that achieves precise and comprehensive reward assessments of sampling directions. We introduce a Multi-Granularity Advantage Integration module that aggregates advantages computed at multiple diffusion scales. Our G$^2$RPO significantly outperforms existing flow-based GRPO baselines.
arXiv Detail & Related papers (2025-10-02T12:57:12Z) - Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with quantized optimization and applies it to multivariate conformal prediction.
arXiv Detail & Related papers (2025-09-29T19:50:19Z) - Variational Autoencoder for Generating Broader-Spectrum prior Proposals in Markov chain Monte Carlo Methods [0.0]
This study uses a Variational Autoencoder (VAE) method to enhance the efficiency and applicability of Markov chain Monte Carlo (MCMC) methods. The VAE framework enables a data-driven approach to flexibly capture a broader range of correlation structures in inverse problems.
arXiv Detail & Related papers (2025-06-16T14:11:16Z) - On the performance of multi-fidelity and reduced-dimensional neural emulators for inference of physiologic boundary conditions [0.7499722271664147]
We focus on Bayesian parameter estimation and explore different methods to reduce the computational cost of sampling from the posterior distribution. A common approach is to construct a surrogate model for the high-fidelity simulation itself. Another possible approach is to treat the discrepancy between the high-fidelity and surrogate models as random noise and estimate its distribution using normalizing flows.
arXiv Detail & Related papers (2025-06-13T11:20:49Z) - Solving Inverse Problems with FLAIR [68.87167940623318]
We present FLAIR, a training-free variational framework that leverages flow-based generative models as a prior for inverse problems. Results on standard imaging benchmarks demonstrate that FLAIR consistently outperforms existing diffusion- and flow-based methods in terms of reconstruction quality and sample diversity.
arXiv Detail & Related papers (2025-06-03T09:29:47Z) - Multi-Fidelity Prediction and Uncertainty Quantification with Laplace Neural Operators for Parametric Partial Differential Equations [6.03891813540831]
Laplace Neural Operators (LNOs) have emerged as a promising approach in scientific machine learning. We propose multi-fidelity Laplace Neural Operators (MF-LNOs), which combine a low-fidelity base model with parallel linear/nonlinear HF correctors and dynamic inter-fidelity weighting. This allows us to exploit correlations between LF and HF datasets and achieve accurate inference of quantities of interest.
arXiv Detail & Related papers (2025-02-01T20:38:50Z) - Multifidelity Cross-validation [0.0]
We are interested in emulating quantities of interest observed from models of a system at multiple fidelities.
We propose a novel method to actively learn the surrogate model via leave-one-out cross-validation (LOO-CV).
We demonstrate the utility of our method on several synthetic test problems as well as on the thermal stress analysis of a gas turbine blade.
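To make the LOO-CV idea in this summary concrete, the following sketch estimates pointwise generalization error by refitting a simple surrogate with each training sample held out in turn. The polynomial least-squares model and all names here are illustrative stand-ins, not the paper's multifidelity emulator.

```python
import numpy as np

def loo_cv_errors(X, y, degree=3):
    # Leave-one-out cross-validation: refit the surrogate without
    # sample i, then measure its squared prediction error at sample i.
    errs = np.empty(len(X))
    for i in range(len(X)):
        mask = np.arange(len(X)) != i            # hold out sample i
        coef = np.polyfit(X[mask], y[mask], degree)
        errs[i] = (np.polyval(coef, X[i]) - y[i]) ** 2
    return errs

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(-1, 1, 20))              # toy training inputs
y = np.cos(3 * X) + 0.05 * rng.standard_normal(20)
errs = loo_cv_errors(X, y)
worst = X[np.argmax(errs)]   # input where the surrogate generalizes worst
```

In an active-learning loop, the held-out errors (or a model fit to them) would guide where the next expensive evaluation is placed, as in the summary above.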
arXiv Detail & Related papers (2024-07-01T17:42:11Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - Conditional Karhunen-Loève regression model with Basis Adaptation for high-dimensional problems: uncertainty quantification and inverse modeling [62.997667081978825]
We propose a methodology for improving the accuracy of surrogate models of the observable response of physical systems.
We apply the proposed methodology to constructing surrogate models via the Basis Adaptation (BA) method of the stationary hydraulic head response.
arXiv Detail & Related papers (2023-07-05T18:14:38Z) - Generalised Latent Assimilation in Heterogeneous Reduced Spaces with Machine Learning Surrogate Models [10.410970649045943]
We develop a system which combines reduced-order surrogate models with a novel data assimilation technique.
Generalised Latent Assimilation can benefit both the efficiency provided by the reduced-order modelling and the accuracy of data assimilation.
arXiv Detail & Related papers (2022-04-07T15:13:12Z)