Multi-Fidelity High-Order Gaussian Processes for Physical Simulation
- URL: http://arxiv.org/abs/2006.04972v1
- Date: Mon, 8 Jun 2020 22:31:59 GMT
- Title: Multi-Fidelity High-Order Gaussian Processes for Physical Simulation
- Authors: Zheng Wang, Wei Xing, Robert Kirby, Shandian Zhe
- Abstract summary: High-fidelity solutions of partial differential equations (PDEs) are much more expensive to compute than low-fidelity ones.
We propose Multi-Fidelity High-Order Gaussian Process (MFHoGP) that can capture complex correlations.
MFHoGP propagates bases throughout fidelities to fuse information, and places a deep matrix GP prior over the basis weights.
- Score: 24.033468062984458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The key task of physical simulation is to solve partial differential
equations (PDEs) on discretized domains, which is known to be costly. In
particular, high-fidelity solutions are much more expensive than low-fidelity
ones. To reduce the cost, we consider novel Gaussian process (GP) models that
leverage simulation examples of different fidelities to predict
high-dimensional PDE solution outputs. Existing GP methods are either not
scalable to high-dimensional outputs or lack effective strategies to integrate
multi-fidelity examples. To address these issues, we propose Multi-Fidelity
High-Order Gaussian Process (MFHoGP) that can capture complex correlations both
between the outputs and between the fidelities to enhance solution estimation,
and scale to large numbers of outputs. Based on a novel nonlinear
coregionalization model, MFHoGP propagates bases throughout fidelities to fuse
information, and places a deep matrix GP prior over the basis weights to
capture the (nonlinear) relationships across the fidelities. To improve
inference efficiency and quality, we use bases decomposition to largely reduce
the model parameters, and layer-wise matrix Gaussian posteriors to capture the
posterior dependency and to simplify the computation. Our stochastic
variational learning algorithm successfully handles millions of outputs without
extra sparse approximations. We show the advantages of our method in several
typical applications.
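To make the coregionalization idea concrete, the following is a minimal sketch (not the authors' MFHoGP implementation): a high-dimensional output field is approximated by a small set of output bases, and an independent GP predicts each basis weight as a function of the input. The basis rank, RBF kernel, lengthscale, and synthetic data are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def coregional_gp_predict(X_train, Y_train, X_test, rank=3, noise=1e-4):
    """Toy linear coregionalization: bases from an SVD of the training
    outputs, one zero-mean GP posterior mean per basis weight."""
    # Basis B (d x rank) from the top right singular vectors of Y_train.
    _, _, Vt = np.linalg.svd(Y_train, full_matrices=False)
    B = Vt[:rank].T
    W_train = Y_train @ B                      # project outputs onto the bases
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    W_test = Ks @ np.linalg.solve(K, W_train)  # GP posterior mean per weight
    return W_test @ B.T                        # lift back to the full field

# Synthetic high-dimensional outputs: a PDE-like 1-D field on 200 grid nodes.
rng = np.random.default_rng(0)
X_train = np.linspace(0, 1, 20)
grid = np.linspace(0, np.pi, 200)
Y_train = np.sin(np.outer(X_train, grid)) + 0.01 * rng.standard_normal((20, 200))
X_test = np.array([0.25, 0.75])
Y_pred = coregional_gp_predict(X_train, Y_train, X_test)
print(Y_pred.shape)  # (2, 200): the full 200-dimensional field per test input
```

The point of the decomposition is that only `rank` scalar GPs are needed instead of one GP per output dimension, which is what lets this style of model scale to large numbers of outputs.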
Related papers
- Compact Multi-Threshold Quantum Information Driven Ansatz For Strongly Interactive Lattice Spin Models [0.0]
We introduce a systematic procedure for ansatz building based on approximate Quantum Mutual Information (QMI).
Our approach generates a layered-structured ansatz, where each layer's qubit pairs are selected based on their QMI values, resulting in more efficient state preparation and optimization routines.
Our results show that the Multi-QIDA method reduces the computational complexity while maintaining high precision, making it a promising tool for quantum simulations in lattice spin models.
arXiv Detail & Related papers (2024-08-05T17:07:08Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Gradient-enhanced deep Gaussian processes for multifidelity modelling [0.0]
Multifidelity models integrate data from multiple sources to produce a single approximator for the underlying process.
Deep Gaussian processes (GPs) are attractive for multifidelity modelling as they are non-parametric, robust to overfitting, and perform well on small datasets.
arXiv Detail & Related papers (2024-02-25T11:08:19Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - RMFGP: Rotated Multi-fidelity Gaussian process with Dimension Reduction for High-dimensional Uncertainty Quantification [12.826754199680474]
Multi-fidelity modelling enables accurate inference even when only a small set of accurate data is available.
By combining the realizations of the high-fidelity model with one or more low-fidelity models, the multi-fidelity method can make accurate predictions of quantities of interest.
This paper proposes a new dimension reduction framework based on rotated multi-fidelity Gaussian process regression and a Bayesian active learning scheme.
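The low-/high-fidelity fusion described above can be illustrated with the classical autoregressive construction f_hi(x) ≈ ρ·f_lo(x) + δ(x) (Kennedy-O'Hagan style), on which methods like RMFGP build. This is a minimal NumPy sketch, not the paper's model; the toy simulators, kernels, lengthscales, and the least-squares estimate of ρ are all assumptions made for illustration.

```python
import numpy as np

def rbf(X1, X2, ls=0.2):
    """Squared-exponential kernel for 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_mean(X_tr, y_tr, X_te, ls=0.2, noise=1e-4):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = rbf(X_tr, X_tr, ls) + noise * np.eye(len(X_tr))
    return rbf(X_te, X_tr, ls) @ np.linalg.solve(K, y_tr)

# Cheap-but-biased low-fidelity simulator and an expensive high-fidelity one.
f_lo = lambda x: np.sin(8 * x)
f_hi = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x

X_lo = np.linspace(0, 1, 40)   # many cheap runs
X_hi = np.linspace(0, 1, 6)    # only a few expensive runs
y_lo, y_hi = f_lo(X_lo), f_hi(X_hi)

# Step 1: fit a GP to the low-fidelity data.
m_lo_at_hi = gp_mean(X_lo, y_lo, X_hi)
# Step 2: estimate the scale rho by least squares, then model the
# discrepancy delta(x) = f_hi(x) - rho * f_lo(x) with a second GP.
rho = (y_hi @ m_lo_at_hi) / (m_lo_at_hi @ m_lo_at_hi)
delta = y_hi - rho * m_lo_at_hi

X_te = np.linspace(0, 1, 101)
pred = rho * gp_mean(X_lo, y_lo, X_te) + gp_mean(X_hi, delta, X_te, ls=0.5)
err = np.max(np.abs(pred - f_hi(X_te)))
print(f"max abs error vs. true high-fidelity: {err:.3f}")
```

The design choice here is typical of multi-fidelity surrogates: the abundant low-fidelity runs pin down the shape of the function, while the few expensive runs only need to correct a smooth, low-amplitude discrepancy.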
arXiv Detail & Related papers (2022-04-11T01:20:35Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Multi-fidelity modeling with different input domain definitions using Deep Gaussian Processes [0.0]
Multi-fidelity approaches combine different models built on a scarce but accurate data-set (high-fidelity data-set) and a large but approximate one (low-fidelity data-set).
Deep Gaussian Processes (DGPs), which are functional compositions of GPs, have also been adapted to multi-fidelity modelling via the Multi-Fidelity Deep Gaussian Process model (MF-DGP).
arXiv Detail & Related papers (2020-06-29T10:44:06Z)
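The composition idea behind MF-DGP can be sketched in a few lines: a second GP receives the augmented input [x, f_lo(x)], so it can learn a nonlinear relationship between fidelities rather than a fixed linear scaling. Everything below (toy simulators, kernels, zero-mean exact-GP posterior means in place of deep GP inference) is an illustrative assumption, not the MF-DGP model itself.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """RBF kernel on row-vector inputs (works for 1-D or 2-D features)."""
    d2 = ((A[:, None, :] - B[None, :, :]) / ls) ** 2
    return np.exp(-0.5 * d2.sum(-1))

def gp_mean(X_tr, y_tr, X_te, ls=0.5, noise=1e-4):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = rbf(X_tr, X_tr, ls) + noise * np.eye(len(X_tr))
    return rbf(X_te, X_tr, ls) @ np.linalg.solve(K, y_tr)

# Toy simulators: the high fidelity is a *nonlinear* function of the low one.
f_lo = lambda x: np.sin(6 * x)
f_hi = lambda x: np.sin(6 * x) ** 2 + 0.2 * x

X_lo = np.linspace(0, 1, 30)[:, None]   # abundant low-fidelity inputs
X_hi = np.linspace(0, 1, 8)[:, None]    # scarce high-fidelity inputs

# Layer 1: GP on the low-fidelity data, evaluated wherever it is needed.
lo_at_hi = gp_mean(X_lo, f_lo(X_lo[:, 0]), X_hi)
Z_hi = np.hstack([X_hi, lo_at_hi[:, None]])   # augmented input [x, f_lo(x)]

X_te = np.linspace(0, 1, 50)[:, None]
lo_at_te = gp_mean(X_lo, f_lo(X_lo[:, 0]), X_te)
Z_te = np.hstack([X_te, lo_at_te[:, None]])

# Layer 2: GP from the augmented input to the high-fidelity output.
pred = gp_mean(Z_hi, f_hi(X_hi[:, 0]), Z_te)
print(pred.shape)  # one high-fidelity prediction per test input
```

Because the second layer conditions on the low-fidelity value itself, it can capture relationships (such as the squaring above) that a single linear correlation coefficient between fidelities cannot.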
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.