Disentangled Multi-Fidelity Deep Bayesian Active Learning
- URL: http://arxiv.org/abs/2305.04392v3
- Date: Mon, 5 Jun 2023 02:40:42 GMT
- Title: Disentangled Multi-Fidelity Deep Bayesian Active Learning
- Authors: Dongxia Wu, Ruijia Niu, Matteo Chinazzi, Yian Ma, Rose Yu
- Abstract summary: Multi-fidelity active learning aims to learn a direct mapping from input parameters to simulation outputs at the highest fidelity.
Deep learning-based methods often impose a hierarchical structure in hidden representations, which only supports passing information from low-fidelity to high-fidelity.
We propose a novel framework called Disentangled Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns the surrogate models conditioned on the distribution of functions at multiple fidelities.
- Score: 19.031567953748453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To balance quality and cost, various domain areas of science and engineering
run simulations at multiple levels of sophistication. Multi-fidelity active
learning aims to learn a direct mapping from input parameters to simulation
outputs at the highest fidelity by actively acquiring data from multiple
fidelity levels. However, existing approaches based on Gaussian processes are
hardly scalable to high-dimensional data. Deep learning-based methods often
impose a hierarchical structure in hidden representations, which only supports
passing information from low-fidelity to high-fidelity. These approaches can
lead to the undesirable propagation of errors from low-fidelity representations
to high-fidelity ones. We propose a novel framework called Disentangled
Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns the
surrogate models conditioned on the distribution of functions at multiple
fidelities. On benchmark tasks of learning deep surrogates of partial
differential equations, including the heat equation, Poisson's equation, and
fluid simulations, our approach significantly outperforms the state of the art
in prediction accuracy and sample efficiency.
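To make the setting concrete, below is a minimal sketch of a generic multi-fidelity active-learning loop, not the authors' D-MFDAL algorithm: a bootstrap ensemble per fidelity stands in for a deep Bayesian surrogate, and each round queries the (input, fidelity) pair with the highest uncertainty-per-cost score. The toy simulator, query costs, and polynomial ensemble are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, fidelity):
    """Toy 1-D simulator: fidelity 1 is a cheap, biased version of fidelity 2."""
    exact = np.sin(3 * x) + 0.5 * x
    return exact + (0.3 * np.cos(5 * x) if fidelity == 1 else 0.0)

COST = {1: 1.0, 2: 10.0}      # a high-fidelity query costs 10x more (assumed)
train = {1: [], 2: []}        # acquired (x, y) pairs per fidelity

def fit_ensemble(pairs, n_models=5, deg=3):
    """Bootstrap ensemble of polynomial fits, standing in for a deep surrogate."""
    X = np.array([p[0] for p in pairs])
    y = np.array([p[1] for p in pairs])
    boots = [rng.integers(0, len(X), len(X)) for _ in range(n_models)]
    return [np.polyfit(X[b], y[b], deg) for b in boots]

# Seed each fidelity with a few random points.
for f in (1, 2):
    for x0 in rng.uniform(-1, 1, 8):
        train[f].append((x0, simulate(x0, f)))

pool = rng.uniform(-1, 1, 200)    # candidate inputs
for step in range(20):
    ensembles = {f: fit_ensemble(train[f]) for f in (1, 2)}
    best = None
    for f in (1, 2):
        preds = np.array([np.polyval(c, pool) for c in ensembles[f]])
        score = preds.std(axis=0) / COST[f]     # disagreement per unit cost
        i = int(score.argmax())
        if best is None or score[i] > best[0]:
            best = (score[i], f, pool[i])
    _, f, x0 = best
    train[f].append((x0, simulate(x0, f)))      # query the chosen fidelity

print({f: len(v) for f, v in train.items()})
```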
Related papers
- Practical multi-fidelity machine learning: fusion of deterministic and Bayesian models [0.34592277400656235]
Multi-fidelity machine learning methods integrate scarce, resource-intensive high-fidelity data with abundant but less accurate low-fidelity data.
We propose a practical multi-fidelity strategy for problems spanning low- and high-dimensional domains.
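A common way to realize such a fusion, sketched below on toy functions with scikit-learn, is to keep a deterministic low-fidelity model as the base and fit a Gaussian process to the residual against scarce high-fidelity data; this illustrates the general strategy, not the paper's exact method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def low_fidelity(x):              # cheap deterministic model (toy)
    return np.sin(3 * x)

def high_fidelity(x):             # expensive ground truth (toy)
    return np.sin(3 * x) + 0.4 * x ** 2

# Scarce high-fidelity observations.
X_hi = rng.uniform(-1, 1, (10, 1))
y_hi = high_fidelity(X_hi).ravel()

# Bayesian model of the residual between high fidelity and the cheap base.
resid = y_hi - low_fidelity(X_hi).ravel()
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4)).fit(X_hi, resid)

# Fused prediction = deterministic base + GP residual, with uncertainty.
X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
mu, sd = gp.predict(X_test, return_std=True)
print(np.c_[low_fidelity(X_test).ravel() + mu, sd])
```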
arXiv Detail & Related papers (2024-07-21T10:40:50Z)
- PiRD: Physics-informed Residual Diffusion for Flow Field Reconstruction [5.06136344261226]
CNN-based methods for data fidelity enhancement rely on low-fidelity data patterns and distributions during the training phase.
Our proposed model, Physics-informed Residual Diffusion, demonstrates the capability to elevate the quality of data from standard low-fidelity inputs.
Experimental results have shown that our approach can effectively reconstruct high-quality outcomes for two-dimensional turbulent flows without requiring retraining.
arXiv Detail & Related papers (2024-04-12T11:45:51Z)
- Multi-Fidelity Residual Neural Processes for Scalable Surrogate Modeling [19.60087366873302]
Multi-fidelity surrogate modeling aims to learn an accurate surrogate at the highest fidelity level.
Deep learning approaches utilize neural network based encoders and decoders to improve scalability.
We propose Multi-fidelity Residual Neural Processes (MFRNP), a novel multi-fidelity surrogate modeling framework.
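The residual idea can be sketched as follows: the high-fidelity surrogate predicts only a correction on top of the low-fidelity output, so low-fidelity errors are modeled explicitly instead of silently inherited. The toy PyTorch model below is an illustration of this pattern, not the MFRNP architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def lo_sim(x):   # cheap low-fidelity simulator (toy)
    return torch.sin(3 * x)

def hi_sim(x):   # expensive high-fidelity simulator (toy)
    return torch.sin(3 * x) + 0.3 * x ** 2

class ResidualSurrogate(nn.Module):
    """Predicts high-fidelity output as low-fidelity output + learned residual."""
    def __init__(self):
        super().__init__()
        self.residual = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x, y_lo):
        # Condition the correction on both the input and the low-fidelity output.
        return y_lo + self.residual(torch.cat([x, y_lo], dim=-1))

model = ResidualSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

x = torch.rand(64, 1) * 2 - 1
for _ in range(500):
    loss = ((model(x, lo_sim(x)) - hi_sim(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```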
arXiv Detail & Related papers (2024-02-29T04:40:25Z)
- Diffusion-Generative Multi-Fidelity Learning for Physical Simulation [24.723536390322582]
We develop a diffusion-generative multi-fidelity learning method based on stochastic differential equations (SDEs), where the generation is a continuous denoising process.
By conditioning on additional inputs (temporal or spatial variables), our model can efficiently learn and predict multi-dimensional solution arrays.
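A minimal illustration of the conditional-denoising idea (a discrete DDPM-style loop rather than the paper's SDE formulation; the toy solution field and network are assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1 - betas, dim=0)

def solution(x, fid):
    """Toy solution field; fid=1 is the accurate version, fid=0 a biased one."""
    return torch.sin(3 * x) + 0.3 * x * (1 - fid)

# Denoiser conditioned on the noisy sample, input x, fidelity tag, and time.
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.rand(128, 1) * 2 - 1
    fid = torch.randint(0, 2, (128, 1)).float()
    y0 = solution(x, fid)
    t = torch.randint(0, T, (128, 1))
    ab = alphas_bar[t]
    eps = torch.randn_like(y0)
    yt = ab.sqrt() * y0 + (1 - ab).sqrt() * eps   # forward noising
    # Train the network to predict the injected noise, given the conditioning.
    pred = net(torch.cat([yt, x, fid, t / T], dim=-1))
    loss = ((pred - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```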
arXiv Detail & Related papers (2023-11-09T18:59:05Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
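The visitation-measure idea can be sketched as follows: if a generative model can sample the state k steps ahead directly, the value is a discounted sum of rewards averaged over those samples, with no learned value network. The toy Gaussian sampler below merely stands in for the diffusion model; it is not the DVF algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.95

def sample_future_state(s, k):
    """Stand-in for a generative model that samples the state k steps ahead
    in one shot, instead of rolling the dynamics out step by step (toy drift)."""
    return s + 0.1 * k + 0.05 * np.sqrt(k) * rng.standard_normal(s.shape)

def reward(s):
    return -np.abs(s - 1.0).sum(axis=-1)

def value_estimate(s, horizon=50, n_samples=256):
    """V(s) ~ sum_k gamma^k E[r(s_{t+k})], each expectation estimated from
    direct multi-step samples rather than a learned value network."""
    v = 0.0
    for k in range(1, horizon + 1):
        batch = np.repeat(s[None], n_samples, axis=0)
        v += GAMMA ** k * reward(sample_future_state(batch, k)).mean()
    return v

print(value_estimate(np.zeros(2)))
```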
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- Infinite-Fidelity Coregionalization for Physical Simulation [22.524773932668023]
Multi-fidelity modeling and learning are important in physical simulation-related applications.
We propose Infinite Fidelity Coregionalization (IFC) to exploit rich information within continuous, infinite fidelities.
We show the advantage of our method in several benchmark tasks in computational physics.
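The core move, as summarized, is to treat fidelity as a continuous index rather than a set of discrete levels; a minimal sketch, with an assumed toy simulator and network (not the IFC model), is a single surrogate that takes the fidelity as an extra input:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(x, s):
    """Toy simulators indexed by continuous fidelity s in [0, 1]:
    the bias term shrinks as s approaches 1."""
    return torch.sin(3 * x) + (1 - s) * 0.3 * torch.cos(5 * x)

# One surrogate over (input, fidelity) instead of one model per discrete level.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(1000):
    x = torch.rand(128, 1) * 2 - 1
    s = torch.rand(128, 1)                  # fidelity sampled continuously
    loss = ((net(torch.cat([x, s], -1)) - simulate(x, s)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Query the surrogate at the highest fidelity s = 1 at test time.
with torch.no_grad():
    x_test = torch.linspace(-1, 1, 5).unsqueeze(-1)
    print(net(torch.cat([x_test, torch.ones_like(x_test)], -1)).squeeze())
```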
arXiv Detail & Related papers (2022-07-01T23:01:10Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
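A loose sketch of the per-fidelity latent idea in the style of a neural process (a simplified illustration, not the MF-HNP architecture): each fidelity aggregates its own context set into a latent variable, so no fidelity is forced to feed hierarchically into another.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class FidelityNP(nn.Module):
    """Each fidelity owns a latent variable inferred from its own context set,
    rather than feeding representations hierarchically into the next level."""
    def __init__(self, n_fidelities=2, z_dim=8):
        super().__init__()
        self.enc = nn.ModuleList(
            nn.Linear(2, 2 * z_dim) for _ in range(n_fidelities))
        self.dec = nn.ModuleList(
            nn.Sequential(nn.Linear(1 + z_dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(n_fidelities))

    def forward(self, f, x_ctx, y_ctx, x_tgt):
        # Aggregate the context set into a per-fidelity latent distribution.
        h = self.enc[f](torch.cat([x_ctx, y_ctx], -1)).mean(0)
        mu, logvar = h.chunk(2)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z = z.expand(x_tgt.shape[0], -1)
        return self.dec[f](torch.cat([x_tgt, z], -1))

model = FidelityNP()
x_ctx = torch.rand(16, 1)
print(model(1, x_ctx, torch.sin(3 * x_ctx), torch.rand(4, 1)).shape)
```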
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- IQ-Learn: Inverse soft-Q Learning for Imitation [95.06031307730245]
Imitation learning from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics.
Behavioral cloning is widely used due to its simplicity of implementation and stable convergence.
We introduce a method for dynamics-aware IL which avoids adversarial training by learning a single Q-function.
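The single-Q-function idea can be sketched as follows: the policy is implicit as a softmax over Q-values, and Q is fit so that this policy explains the expert data. The loop below uses a plain likelihood surrogate on toy data; the actual IQ-Learn objective also operates on (state, action, next state) tuples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N_ACTIONS, STATE_DIM, ALPHA = 4, 8, 0.1

# One Q-network; the policy is implicit: pi(a|s) = softmax(Q(s, .) / alpha).
qnet = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                     nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

# Toy "expert" data: the expert picks the action matching the largest feature.
states = torch.randn(512, STATE_DIM)
expert_actions = states[:, :N_ACTIONS].argmax(-1)

for _ in range(300):
    logits = qnet(states) / ALPHA
    # Fit Q so its induced soft policy explains the expert actions (a
    # simplified surrogate; IQ-Learn's objective also uses next states).
    loss = F.cross_entropy(logits, expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```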
arXiv Detail & Related papers (2021-06-23T03:43:10Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
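A sketch of the information-matrix flavor of such selection (a greedy A-optimal-style rule on assumed random gradient embeddings, not BAIT's exact Fisher objective):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-example gradient embeddings (in BAIT these come from the
# network's last layer); here they are random and purely illustrative.
G = rng.standard_normal((500, 16))
LAM, BATCH = 1e-2, 10

selected, M = [], LAM * np.eye(16)   # running information matrix of the batch
for _ in range(BATCH):
    Minv = np.linalg.inv(M)
    # Greedy rule: pick the point least covered by what was already chosen,
    # i.e. the largest g^T M^{-1} g.
    scores = np.einsum('ij,jk,ik->i', G, Minv, G)
    scores[selected] = -np.inf       # never re-pick a point
    i = int(scores.argmax())
    selected.append(i)
    M += np.outer(G[i], G[i])

print(selected)
```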
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables and thereby obtain the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
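A generic way to encode this, sketched below, is a diversity-regularized ensemble objective that penalizes agreement among members; the paper's actual mechanism is an adversarial information-bottleneck loss, so this is a simplified stand-in on toy data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N_HEADS, BETA = 4, 0.1

heads = nn.ModuleList(
    nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    for _ in range(N_HEADS))
opt = torch.optim.Adam(heads.parameters(), lr=1e-3)

x = torch.randn(256, 10)
y = torch.randint(0, 3, (256,))

for _ in range(200):
    logits = torch.stack([h(x) for h in heads])    # (heads, batch, classes)
    task = sum(F.cross_entropy(l, y) for l in logits) / N_HEADS
    # Reward disagreement: penalize low variance of predictions across heads.
    diversity = logits.softmax(-1).var(dim=0).mean()
    loss = task - BETA * diversity
    opt.zero_grad(); loss.backward(); opt.step()

print(float(task), float(diversity))
```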
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.