Multifidelity Deep Operator Networks For Data-Driven and
Physics-Informed Problems
- URL: http://arxiv.org/abs/2204.09157v2
- Date: Tue, 21 Nov 2023 05:06:33 GMT
- Title: Multifidelity Deep Operator Networks For Data-Driven and
Physics-Informed Problems
- Authors: Amanda A. Howard, Mauro Perego, George E. Karniadakis, Panos Stinis
- Abstract summary: We present a composite Deep Operator Network (DeepONet) for learning using two datasets with different levels of fidelity.
We demonstrate the new multi-fidelity training in diverse examples, including modeling of the ice-sheet dynamics of the Humboldt glacier, Greenland.
- Score: 0.9999629695552196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Operator learning for complex nonlinear systems is increasingly common in
modeling multi-physics and multi-scale systems. However, training such
high-dimensional operators requires a large amount of expensive, high-fidelity
data, either from experiments or simulations. In this work, we present a
composite Deep Operator Network (DeepONet) for learning using two datasets with
different levels of fidelity to accurately learn complex operators when
sufficient high-fidelity data is not available. Additionally, we demonstrate
that the presence of low-fidelity data can improve the predictions of
physics-informed learning with DeepONets. We demonstrate the new multi-fidelity
training in diverse examples, including modeling of the ice-sheet dynamics of
the Humboldt glacier, Greenland, using two different fidelity models and also
using the same physical model at two different resolutions.
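The composite scheme described in the abstract can be illustrated with a minimal NumPy sketch. Everything below (layer sizes, the `DeepONet` and `multifidelity_predict` names, and the untrained random weights) is hypothetical scaffolding, not the authors' implementation; it only shows the data flow of such a composite: a low-fidelity DeepONet produces a cheap prediction, and a second DeepONet, which also sees that prediction, learns the correction toward the high-fidelity operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    """Random (untrained) weights for a small MLP -- illustration only."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

class DeepONet:
    """Minimal DeepONet: G(u)(y) = <branch(u), trunk(y)> + b0."""
    def __init__(self, m_sensors, y_dim, p, rng):
        self.branch = mlp_params([m_sensors, 32, p], rng)
        self.trunk = mlp_params([y_dim, 32, p], rng)
        self.b0 = 0.0

    def __call__(self, u, y):
        # u: (batch, m_sensors) sensor values of the input function
        # y: (batch, y_dim) query locations
        return np.sum(mlp(self.branch, u) * mlp(self.trunk, y),
                      axis=-1, keepdims=True) + self.b0

m, p = 20, 16
lf_net = DeepONet(m, 1, p, rng)      # would be trained on cheap low-fidelity data
# The correction net sees the input function AND the low-fidelity prediction,
# and would learn the (hopefully small) discrepancy to the high-fidelity operator.
corr_net = DeepONet(m + 1, 1, p, rng)

def multifidelity_predict(u, y):
    g_lf = lf_net(u, y)                           # (batch, 1) low-fidelity guess
    u_aug = np.concatenate([u, g_lf], axis=-1)    # augment input with LF prediction
    return g_lf + corr_net(u_aug, y)              # HF estimate = LF + correction

u = rng.standard_normal((8, m))      # 8 sampled input functions at m sensors
y = rng.uniform(0.0, 1.0, (8, 1))    # one query point per sample
out = multifidelity_predict(u, y)
print(out.shape)                     # (8, 1)
```

The key design point the abstract alludes to is that the correction network needs far less high-fidelity data than a single network trained from scratch, because it only has to represent the discrepancy between the two fidelity levels.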
Related papers
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of
General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Knowledge-based Deep Learning for Modeling Chaotic Systems [7.075125892721573]
This paper considers extreme events and their dynamics and proposes models based on deep neural networks, called knowledge-based deep learning (KDL).
Our proposed KDL can learn the complex patterns governing chaotic systems by jointly training on real and simulated data.
We validate our model by assessing it on three real-world benchmark datasets: El Nino sea surface temperature, San Juan Dengue viral infection, and Bjornoya daily precipitation.
arXiv Detail & Related papers (2022-09-09T11:46:25Z)
- Multi-fidelity wavelet neural operator with application to uncertainty quantification [0.0]
We develop a new framework based on the wavelet neural operator which is capable of learning from a multi-fidelity dataset.
The framework's excellent learning capabilities are demonstrated by solving different problems.
arXiv Detail & Related papers (2022-08-11T02:03:30Z)
- Multifidelity deep neural operators for efficient learning of partial differential equations with application to fast inverse design of nanoscale heat transport [2.512625172084287]
We develop a multifidelity neural operator based on a deep operator network (DeepONet).
A multifidelity DeepONet significantly reduces the required amount of high-fidelity data and achieves one order of magnitude smaller error when using the same amount of high-fidelity data.
We apply a multifidelity DeepONet to learn the phonon Boltzmann transport equation (BTE), a framework to compute nanoscale heat transport.
arXiv Detail & Related papers (2022-04-14T01:01:24Z)
- Bi-fidelity Modeling of Uncertain and Partially Unknown Systems using DeepONets [0.0]
We propose a bi-fidelity modeling approach for complex physical systems.
We model the discrepancy between the true system's response and low-fidelity response in the presence of a small training dataset.
We apply the approach to model systems that have parametric uncertainty and are partially unknown.
arXiv Detail & Related papers (2022-04-03T05:30:57Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- Multi-Robot Deep Reinforcement Learning for Mobile Navigation [82.62621210336881]
We propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt).
At training time, HInt learns separate perception and dynamics models, and at test time, HInt integrates the two models in a hierarchical manner and plans actions with the integrated model.
Our mobile navigation experiments show that HInt outperforms conventional hierarchical policies and single-source approaches.
arXiv Detail & Related papers (2021-06-24T19:07:40Z)
- Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face Learning [54.13876727413492]
In many real-world scenarios of face recognition, the training dataset is shallow, meaning only two face images are available for each identity.
As samples grow non-uniformly, this issue generalizes to a broader case known as long-tail face learning.
Based on Semi-Siamese Training (SST), we introduce an advanced solution, named Multi-Agent Semi-Siamese Training (MASST).
MASST includes a probe network and multiple gallery agents, the former aims to encode the probe features, and the latter constitutes a stack of
arXiv Detail & Related papers (2021-05-10T04:57:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.