Infinite-Fidelity Coregionalization for Physical Simulation
- URL: http://arxiv.org/abs/2207.00678v1
- Date: Fri, 1 Jul 2022 23:01:10 GMT
- Title: Infinite-Fidelity Coregionalization for Physical Simulation
- Authors: Shibo Li, Zheng Wang, Robert M. Kirby, Shandian Zhe
- Abstract summary: Multi-fidelity modeling and learning are important in physical simulation-related applications.
We propose Infinite Fidelity Coregionalization (IFC) to exploit rich information within continuous, infinite fidelities.
We show the advantage of our method in several benchmark tasks in computational physics.
- Score: 22.524773932668023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-fidelity modeling and learning are important in physical
simulation-related applications. Such modeling can leverage both low-fidelity
and high-fidelity examples for training to reduce the cost of data generation
while still achieving good performance. While existing approaches only model
finite, discrete fidelities, in practice, the fidelity choice is often
continuous and infinite, which can correspond to a continuous mesh spacing or
finite element length. In this paper, we propose Infinite Fidelity
Coregionalization (IFC). Given the data, our method can extract and exploit
rich information within continuous, infinite fidelities to bolster the
prediction accuracy. Our model can interpolate and/or extrapolate the
predictions to novel fidelities, which can be even higher than the fidelities
of training data. Specifically, we introduce a low-dimensional latent output as
a continuous function of the fidelity and input, and multiply it with a basis
matrix to predict high-dimensional solution outputs. We model the latent output
as a neural Ordinary Differential Equation (ODE) to capture the complex
relationships across fidelities and to integrate information throughout the
continuous fidelity range. We then use Gaussian processes or another ODE to estimate the
fidelity-varying bases. For efficient inference, we reorganize the bases as a
tensor, and use a tensor-Gaussian variational posterior to develop a scalable
inference algorithm for massive outputs. We show the advantage of our method in
several benchmark tasks in computational physics.
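The latent-ODE-plus-basis construction described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy vector field, the dimensions, the Euler integrator, and all names are stand-ins.

```python
import numpy as np

# Hypothetical sketch of the IFC decoding idea (names are illustrative, not
# from the paper's code): a low-dimensional latent h(m, x) evolves with the
# continuous fidelity m via an ODE, and a basis matrix B lifts it to the
# high-dimensional solution output.

rng = np.random.default_rng(0)
latent_dim, out_dim, in_dim = 4, 100, 3

# Stand-ins for learned weights of a neural vector field dh/dm = f(h, x, m).
W_h = rng.normal(scale=0.1, size=(latent_dim, latent_dim))
W_x = rng.normal(scale=0.1, size=(latent_dim, in_dim))

def vector_field(h, x, m):
    """Toy dynamics playing the role of the neural ODE's right-hand side."""
    return np.tanh(W_h @ h + W_x @ x) * (1.0 + m)

def latent_at_fidelity(x, m_target, m0=0.0, steps=50):
    """Euler-integrate the latent state from fidelity m0 up to m_target."""
    h = np.zeros(latent_dim)  # initial latent state
    dm = (m_target - m0) / steps
    m = m0
    for _ in range(steps):
        h = h + dm * vector_field(h, x, m)
        m += dm
    return h

# A fidelity-shared basis matrix mapping latents to high-dimensional outputs.
B = rng.normal(size=(out_dim, latent_dim))

x = rng.normal(size=in_dim)
y_low = B @ latent_at_fidelity(x, m_target=0.3)   # low-fidelity prediction
y_high = B @ latent_at_fidelity(x, m_target=1.5)  # extrapolated fidelity
print(y_low.shape, y_high.shape)  # both (100,)
```

Because the latent is a continuous function of the fidelity, evaluating it at `m_target` values beyond the training range is how interpolation and extrapolation to novel fidelities would work in this sketch.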
Related papers
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
arXiv Detail & Related papers (2024-08-15T16:47:59Z) - Diffusion-Generative Multi-Fidelity Learning for Physical Simulation [24.723536390322582]
We develop a diffusion-generative multi-fidelity learning method based on stochastic differential equations (SDEs), where the generation is a continuous denoising process.
By conditioning on additional inputs (temporal or spatial variables), our model can efficiently learn and predict multi-dimensional solution arrays.
arXiv Detail & Related papers (2023-11-09T18:59:05Z) - Generative Learning of Continuous Data by Tensor Networks [45.49160369119449]
We introduce a new family of tensor network generative models for continuous data.
We benchmark the performance of this model on several synthetic and real-world datasets.
Our methods give important theoretical and empirical evidence of the efficacy of quantum-inspired methods for the rapidly growing field of generative learning.
arXiv Detail & Related papers (2023-10-31T14:37:37Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
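The Koopman-inspired linear latent prior mentioned in this summary can be illustrated with a toy sketch. This is an assumption-laden illustration, not KoVAE's actual code: the matrix, its scaling, and the rollout are stand-ins for a learned linear map.

```python
import numpy as np

# Illustrative sketch (not KoVAE's implementation): a Koopman-style latent
# prior uses a single linear map A, so latent states evolve as z_{t+1} = A z_t.

rng = np.random.default_rng(1)
latent_dim, T = 3, 20

# A stable linear map: scale a random matrix to spectral radius below 1.
A = rng.normal(size=(latent_dim, latent_dim))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))

def rollout(z0, steps):
    """Unroll the linear latent dynamics z_{t+1} = A @ z_t."""
    zs = [z0]
    for _ in range(steps):
        zs.append(A @ zs[-1])
    return np.stack(zs)

traj = rollout(rng.normal(size=latent_dim), T)
print(traj.shape)  # (21, 3)
```

The appeal of the linear map is that the latent dynamics stay analyzable (eigenvalues govern stability and oscillation) even though the encoder and decoder around it are nonlinear.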
arXiv Detail & Related papers (2023-10-04T07:14:43Z) - Disentangled Multi-Fidelity Deep Bayesian Active Learning [19.031567953748453]
Multi-fidelity active learning aims to learn a direct mapping from input parameters to simulation outputs at the highest fidelity.
Deep learning-based methods often impose a hierarchical structure in hidden representations, which only supports passing information from low-fidelity to high-fidelity.
We propose a novel framework called Disentangled Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns the surrogate models conditioned on the distribution of functions at multiple fidelities.
arXiv Detail & Related papers (2023-05-07T23:14:58Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Deep Multi-Fidelity Active Learning of High-dimensional Outputs [17.370056935194786]
We develop a deep neural network-based multi-fidelity model for learning with high-dimensional outputs.
We then propose a mutual information-based acquisition function that extends the predictive entropy principle.
We show the advantage of our method in several applications of computational physics and engineering design.
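The predictive-entropy principle that this summary's acquisition function extends can be sketched in a few lines. This is a generic illustration under the assumption of Gaussian predictive distributions, not the paper's mutual-information criterion; all names and numbers are hypothetical.

```python
import numpy as np

# Generic entropy-based acquisition sketch (not the paper's exact criterion):
# for a Gaussian predictive distribution, differential entropy grows with
# variance, so the most uncertain candidate is queried next.

def gaussian_entropy(var):
    """Differential entropy of N(mu, var); independent of the mean."""
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

# Hypothetical per-candidate predictive variances from a surrogate model.
candidate_vars = np.array([0.1, 2.5, 0.7, 1.2])
scores = gaussian_entropy(candidate_vars)
next_query = int(np.argmax(scores))
print(next_query)  # index of the most uncertain candidate -> 1
```

A multi-fidelity acquisition would additionally weigh this information gain against the per-fidelity simulation cost before choosing what to query.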
arXiv Detail & Related papers (2020-12-02T00:02:31Z) - Physics Informed Deep Kernel Learning [24.033468062984458]
Physics Informed Deep Kernel Learning (PI-DKL) exploits physics knowledge represented by differential equations with latent sources.
For efficient and effective inference, we marginalize out the latent variables and derive a collapsed model evidence lower bound (ELBO).
arXiv Detail & Related papers (2020-06-08T22:43:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.