Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)
- URL: http://arxiv.org/abs/2106.13361v1
- Date: Fri, 25 Jun 2021 00:19:19 GMT
- Title: Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)
- Authors: Michael Penwarden, Shandian Zhe, Akil Narayan, Robert M. Kirby
- Abstract summary: Physics-informed Neural Networks (PINNs) are candidates for multifidelity simulation approaches.
We propose a particular multifidelity approach applied to PINNs that exploits low-rank structure.
- Score: 13.590496719224987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multifidelity simulation methodologies are often used in an attempt to
judiciously combine low-fidelity and high-fidelity simulation results in an
accuracy-increasing, cost-saving way. Candidates for this approach are
simulation methodologies for which there are fidelity differences connected
with significant computational cost differences. Physics-informed Neural
Networks (PINNs) are candidates for these types of approaches due to the
significant difference in training times required when different fidelities
(expressed in terms of architecture width and depth as well as optimization
criteria) are employed. In this paper, we propose a particular multifidelity
approach applied to PINNs that exploits low-rank structure. We demonstrate that
width, depth, and optimization criteria can be used as parameters related to
model fidelity, and show numerical justification of cost differences in
training due to fidelity parameter choices. We test our multifidelity scheme on
various canonical forward PDE models that have been presented in the emerging
PINNs literature.
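To make the idea concrete, here is a minimal sketch (not the authors' low-rank construction) of a two-fidelity PINN for a 1-D Poisson problem: fidelity is varied through network width/depth and the optimization budget, exactly the fidelity parameters the abstract names, and a frozen low-fidelity surrogate is combined additively with a high-fidelity correction network. All sizes, step counts, and the `lf`/`corr` decomposition are illustrative assumptions.
```python
# Minimal sketch of a two-fidelity PINN for the 1-D Poisson problem
# u''(x) = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0 (exact: u = sin(pi x)).
# Fidelity is controlled by width/depth and the optimization budget; the
# additive low-fidelity + correction composition is a generic illustration,
# not the paper's low-rank scheme.
import torch

def mlp(width, depth):
    layers, d_in = [], 1
    for _ in range(depth):
        layers += [torch.nn.Linear(d_in, width), torch.nn.Tanh()]
        d_in = width
    return torch.nn.Sequential(*layers, torch.nn.Linear(d_in, 1))

def pde_loss(model, x):
    x = x.clone().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -torch.pi**2 * torch.sin(torch.pi * x)   # source term
    bc = torch.tensor([[0.0], [1.0]])            # boundary points
    return ((d2u - f)**2).mean() + (model(bc)**2).mean()

def train(model, params, steps, lr):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        x = torch.rand(64, 1)                    # random collocation points
        opt.zero_grad()
        pde_loss(model, x).backward()
        opt.step()

lf = mlp(width=8, depth=2)                       # cheap, low-fidelity surrogate
train(lf, lf.parameters(), steps=500, lr=1e-2)   # small optimization budget
for p in lf.parameters():
    p.requires_grad_(False)                      # freeze the LF surrogate

corr = mlp(width=64, depth=4)                    # high-fidelity correction net
hf = lambda x: lf(x) + corr(x)                   # multifidelity composition
train(hf, corr.parameters(), steps=2000, lr=1e-3)  # larger HF budget
```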
Related papers
- Multi-Fidelity Residual Neural Processes for Scalable Surrogate Modeling [19.60087366873302]
Multi-fidelity surrogate modeling aims to learn an accurate surrogate at the highest fidelity level.
Deep learning approaches utilize neural network based encoders and decoders to improve scalability.
We propose Multi-fidelity Residual Neural Processes (MFRNP), a novel multi-fidelity surrogate modeling framework.
arXiv Detail & Related papers (2024-02-29T04:40:25Z)
- Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks [24.14254861023394]
PINNs have pioneered a proper integration of deep learning and scientific computing, but they require the repetitive, time-consuming training of neural networks.
In this study, we suggest a path that potentially opens up the possibility for PINNs to serve as practical solvers for such repetitive PDE simulations.
We propose a lightweight low-rank PINNs containing only hundreds of model parameters and an associated hypernetwork-based meta-learning algorithm.
arXiv Detail & Related papers (2023-10-14T08:13:43Z)
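A toy sketch of the low-rank hypernetwork idea summarized above: a small hypernetwork maps a PDE parameter mu to rank-r weight factors for a PINN layer, so each new parameter instance needs only a few hundred emitted values. The layer sizes, rank, and names below are illustrative assumptions, not the paper's architecture.
```python
# Toy sketch: a hypernetwork emits low-rank weight factors for a PINN layer,
# so conditioning on a new PDE parameter mu yields a fresh, very small set of
# effective weights. Rank, sizes, and names are illustrative assumptions.
import torch

d_in, d_out, rank = 1, 32, 4

class LowRankHyperLayer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # hypernetwork: mu -> factors U (d_out x r), V (r x d_in), bias b
        self.hyper = torch.nn.Sequential(
            torch.nn.Linear(1, 16), torch.nn.Tanh(),
            torch.nn.Linear(16, d_out * rank + rank * d_in + d_out),
        )

    def forward(self, x, mu):
        theta = self.hyper(mu)
        u, rest = theta[: d_out * rank], theta[d_out * rank :]
        v, b = rest[: rank * d_in], rest[rank * d_in :]
        W = u.view(d_out, rank) @ v.view(rank, d_in)  # rank-r weight matrix
        return torch.tanh(x @ W.T + b)

layer = LowRankHyperLayer()
x = torch.rand(8, d_in)
mu = torch.tensor([0.5])          # e.g. a diffusion coefficient
h = layer(x, mu)                  # (8, 32) features conditioned on mu
```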
- Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z)
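For illustration, a minimal 1-D U-Net in the spirit of the summary above: one encoder level, one decoder level, a skip connection, and per-source output channels trained against an MSE criterion. Channel counts and kernel sizes are arbitrary assumptions.
```python
# Minimal 1-D U-Net sketch for single-channel source separation: one
# downsampling level, one upsampling level, and a skip connection.
# All channel counts and kernel sizes are arbitrary illustrative choices.
import torch

class TinyUNet1d(torch.nn.Module):
    def __init__(self, sources=2):
        super().__init__()
        self.enc = torch.nn.Conv1d(1, 16, 5, padding=2)
        self.down = torch.nn.Conv1d(16, 32, 4, stride=2, padding=1)
        self.up = torch.nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1)
        self.out = torch.nn.Conv1d(32, sources, 5, padding=2)

    def forward(self, x):                  # x: (batch, 1, time)
        e = torch.relu(self.enc(x))
        d = torch.relu(self.up(torch.relu(self.down(e))))
        d = torch.cat([d, e], dim=1)       # skip connection
        return self.out(d)                 # (batch, sources, time)

mix = torch.randn(4, 1, 256)               # mixture waveform
est = TinyUNet1d()(mix)                    # per-source estimates
loss = torch.nn.functional.mse_loss(est, torch.randn(4, 2, 256))
```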
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for extracting features from two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
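For reference, the classical (input-independent) linear CCA that the dynamic scaling above extends, run with scikit-learn on synthetic two-view data; the dynamic-scaling model itself is not reproduced here.
```python
# Classical linear CCA on two synthetic views sharing a latent signal:
# find maximally correlated linear projections. This is the baseline the
# paper's input-dependent scaling extends.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))                                 # shared latent
X = z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(500, 10))  # view 1
Y = z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(500, 8))    # view 2

cca = CCA(n_components=2).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
# canonical correlations per component (close to 1 for the shared signal)
print([np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(2)])
```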
- Deep Variational Models for Collaborative Filtering-based Recommender Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject noise into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z)
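A generic sketch of injecting variational noise into a latent space via the reparameterization trick, which is the mechanism the summary above alludes to; the surrounding collaborative-filtering architecture is omitted and all sizes are placeholders.
```python
# Generic sketch of variational noise injection in the latent space of a deep
# collaborative-filtering model: an encoder head outputs (mu, log_var) and
# sampling uses the reparameterization trick. The recommender architecture
# around it is omitted; all sizes are placeholders.
import torch

class VariationalLatent(torch.nn.Module):
    def __init__(self, d_in=64, d_latent=16):
        super().__init__()
        self.mu = torch.nn.Linear(d_in, d_latent)
        self.log_var = torch.nn.Linear(d_in, d_latent)

    def forward(self, h):                        # h: user/item embedding
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # KL term regularizing the enriched latent space toward N(0, I)
        kl = -0.5 * (1 + log_var - mu**2 - log_var.exp()).sum(-1).mean()
        return z, kl

z, kl = VariationalLatent()(torch.randn(32, 64))
```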
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Transfer Learning on Multi-Fidelity Data [0.0]
Neural networks (NNs) are often used as surrogates or emulators of partial differential equations (PDEs) that describe the dynamics of complex systems.
We rely on multi-fidelity simulations to reduce the cost of data generation for subsequent training of a deep convolutional NN (CNN) using transfer learning.
Our numerical experiments demonstrate that a mixture of a comparatively large number of low-fidelity data and smaller numbers of high- and low-fidelity data provides an optimal balance of computational speed-up and prediction accuracy.
arXiv Detail & Related papers (2021-04-29T00:06:19Z)
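A compact sketch of the transfer-learning recipe described above: pretrain a CNN on abundant low-fidelity data, freeze the convolutional trunk, and fine-tune only the head on scarce high-fidelity data. The random tensors standing in for simulation data, and the architecture, are placeholders.
```python
# Sketch of transfer learning across fidelities: pretrain a small CNN on
# abundant low-fidelity simulations, then freeze the convolutional trunk and
# fine-tune only the head on scarce high-fidelity data. Shapes and the random
# "simulation" data are placeholders.
import torch

trunk = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Flatten(),
)
head = torch.nn.Linear(8 * 16 * 16, 1)
model = torch.nn.Sequential(trunk, head)

def fit(params, x, y, steps, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        opt.step()

x_lf, y_lf = torch.randn(512, 1, 16, 16), torch.randn(512, 1)  # cheap, many
x_hf, y_hf = torch.randn(16, 1, 16, 16), torch.randn(16, 1)    # costly, few

fit(model.parameters(), x_lf, y_lf, steps=200)   # pretrain on low fidelity
for p in trunk.parameters():
    p.requires_grad_(False)                      # freeze the trunk
fit(head.parameters(), x_hf, y_hf, steps=100)    # fine-tune on high fidelity
```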
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
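To make the sampling-based baseline mentioned above concrete: a mean-field Bernoulli variational distribution over binary variables and a Monte Carlo ELBO estimate for a pairwise (hence polynomial) unnormalized log-density. The paper's analytic, circuit-based computation is not reproduced; this is the noisy estimator it avoids.
```python
# Sampling-based ELBO baseline: a mean-field Bernoulli q over binary
# variables and a Monte Carlo estimate of the ELBO for a pairwise (i.e.
# polynomial) unnormalized log-density. The paper's analytic circuit-based
# computation is not reproduced here.
import torch

n = 10
J = torch.randn(n, n) * 0.1            # pairwise couplings
J = (J + J.T) / 2

def log_p_tilde(x):                    # polynomial in x (degree 2)
    return torch.einsum('bi,ij,bj->b', x, J, x)

logits = torch.zeros(n)                # mean-field q parameters (to be
                                       # optimized, e.g. via REINFORCE)

def elbo_mc(num_samples=1000):
    q = torch.distributions.Bernoulli(logits=logits)
    x = q.sample((num_samples,))
    # ELBO = E_q[log p~(x) - log q(x)], estimated by sampling
    return (log_p_tilde(x) - q.log_prob(x).sum(-1)).mean()

print(elbo_mc())                       # noisy Monte Carlo estimate
```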
- Multi-fidelity modeling with different input domain definitions using Deep Gaussian Processes [0.0]
Multi-fidelity approaches combine different models built on a scarce but accurate data-set (high-fidelity data-set) and a large but approximate one (low-fidelity data-set).
Deep Gaussian Processes (DGPs), which are functional compositions of GPs, have also been adapted to multi-fidelity via the Multi-Fidelity Deep Gaussian process model (MF-DGP).
arXiv Detail & Related papers (2020-06-29T10:44:06Z)
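A compact two-level sketch of composed-GP multi-fidelity modeling (a simple stand-in, not the full MF-DGP): the high-fidelity GP receives the low-fidelity GP's prediction as an additional input. Data, functions, and kernels below are illustrative defaults.
```python
# Compact two-level sketch of GP-based multi-fidelity composition: a GP fit
# on many low-fidelity samples feeds its prediction as an extra input to a
# GP fit on few high-fidelity samples. A simple composed-GP illustration,
# not the full MF-DGP model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

f_lf = lambda x: np.sin(8 * x)                   # cheap approximate model
f_hf = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x   # accurate expensive model

x_lf = np.linspace(0, 1, 40)[:, None]            # large low-fidelity data-set
x_hf = np.linspace(0, 1, 6)[:, None]             # scarce high-fidelity data-set

gp_lf = GaussianProcessRegressor().fit(x_lf, f_lf(x_lf).ravel())
# HF GP sees both x and the LF prediction at x (a functional composition)
feats = lambda x: np.hstack([x, gp_lf.predict(x)[:, None]])
gp_hf = GaussianProcessRegressor().fit(feats(x_hf), f_hf(x_hf).ravel())

x_test = np.linspace(0, 1, 100)[:, None]
y_pred = gp_hf.predict(feats(x_test))            # multi-fidelity prediction
```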