Deep neural network enabled corrective source term approach to hybrid
analysis and modeling
- URL: http://arxiv.org/abs/2105.11521v1
- Date: Mon, 24 May 2021 20:17:13 GMT
- Title: Deep neural network enabled corrective source term approach to hybrid
analysis and modeling
- Authors: Sindre Stenen Blakseth and Adil Rasheed and Trond Kvamsdal and Omer
San
- Abstract summary: Hybrid Analysis and Modeling (HAM) is an emerging modeling paradigm which aims to combine physics-based modeling and data-driven modeling.
We introduce, justify and demonstrate a novel approach to HAM -- the Corrective Source Term Approach (CoSTA).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hybrid Analysis and Modeling (HAM) is an emerging modeling paradigm which
aims to combine physics-based modeling (PBM) and data-driven modeling (DDM) to
create generalizable, trustworthy, accurate, computationally efficient and
self-evolving models. Here, we introduce, justify and demonstrate a novel
approach to HAM -- the Corrective Source Term Approach (CoSTA) -- which
augments the governing equation of a PBM model with a corrective source term
generated by a deep neural network (DNN). In a series of numerical experiments
on one-dimensional heat diffusion, CoSTA is generally found to outperform
comparable DDM and PBM models in terms of accuracy -- often reducing predictive
errors by several orders of magnitude -- while also generalizing better than
pure DDM. Due to its flexible but solid theoretical foundation, CoSTA provides
a modular framework for leveraging novel developments within both PBM and DDM,
and due to the interpretability of the DNN-generated source term within the PBM
paradigm, CoSTA can be a potential door-opener for data-driven techniques to
enter high-stakes applications previously reserved for pure PBM.
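The core CoSTA idea described in the abstract -- augmenting the discretized governing equation of a physics-based model with a DNN-generated corrective source term -- can be sketched for the 1D heat diffusion case. The sketch below is a minimal illustration only, assuming an explicit Euler finite-difference scheme with fixed Dirichlet boundaries; the function names (`costa_step`, `corrective_source`) are illustrative, and a placeholder callable stands in for the trained DNN.

```python
import numpy as np

def costa_step(T, alpha, dx, dt, corrective_source):
    """One explicit-Euler step of the 1D heat equation, augmented with a
    corrective source term in the spirit of CoSTA. Boundary nodes are
    held fixed (Dirichlet conditions)."""
    # Second-order central-difference Laplacian on the interior nodes
    lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2
    # In CoSTA this term would come from a trained DNN; here it is any
    # callable mapping the current state to a source field.
    sigma = corrective_source(T)
    T_new = T.copy()
    T_new[1:-1] += dt * (alpha * lap + sigma[1:-1])
    return T_new

# With a zero source term, the scheme reduces to the pure PBM solver.
T = np.zeros(11)
T[5] = 1.0  # initial hat profile
for _ in range(4):
    T = costa_step(T, alpha=1.0, dx=0.1, dt=0.004,
                   corrective_source=lambda T: np.zeros_like(T))
```

A trained network would replace the zero-source lambda, letting the data-driven term absorb whatever physics the discretized PBM equation misses.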
Related papers
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z) - Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171]
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
arXiv Detail & Related papers (2023-09-10T22:05:24Z) - Bayesian tomography using polynomial chaos expansion and deep generative
networks [0.0]
We present a strategy combining the excellent reconstruction performances of a variational autoencoder (VAE) with the accuracy of PCA-PCE surrogate modeling.
Within the MCMC process, the parametrization of the VAE is leveraged for prior exploration and sample proposals.
arXiv Detail & Related papers (2023-07-09T16:44:37Z) - Deep Generative Modeling with Backward Stochastic Differential Equations [0.0]
This paper proposes a novel deep generative model, called BSDE-Gen, which combines the flexibility of backward stochastic differential equations (BSDEs) with the power of deep neural networks.
The incorporation of the uncertainty in the generative modeling process makes BSDE-Gen an effective and natural approach for generating high-dimensional data.
arXiv Detail & Related papers (2023-04-08T15:37:38Z) - Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA)
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z) - Deep Generative Modeling on Limited Data with Regularization by
Nontransferable Pre-trained Models [32.52492468276371]
We propose regularized deep generative model (Reg-DGM) to reduce the variance of generative modeling with limited data.
Reg-DGM uses a pre-trained model to optimize a weighted sum of a certain divergence and the expectation of an energy function.
Empirically, with various pre-trained feature extractors and a data-dependent energy function, Reg-DGM consistently improves the generation performance of strong DGMs with limited data.
arXiv Detail & Related papers (2022-08-30T10:28:50Z) - Combining physics-based and data-driven techniques for reliable hybrid
analysis and modeling using the corrective source term approach [0.0]
Digital twins, autonomous systems, and artificially intelligent systems require accurate, interpretable, computationally efficient, and generalizable models.
We show how a hybrid approach combining the best of physics-based modeling and data-driven modeling can result in models that outperform both.
arXiv Detail & Related papers (2022-06-07T17:10:58Z) - Bagging, optimized dynamic mode decomposition (BOP-DMD) for robust,
stable forecasting with spatial and temporal uncertainty-quantification [2.741266294612776]
Dynamic mode decomposition (DMD) provides a framework for learning a best-fit linear dynamics model over snapshots of temporal, or spatio-temporal, data.
The majority of DMD algorithms are prone to bias errors from noisy measurements of the dynamics, leading to poor model fits and unstable forecasting capabilities.
The optimized DMD algorithm minimizes the model bias with a variable projection optimization, thus leading to stabilized forecasting capabilities.
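For context, the baseline "exact DMD" regression that optimized and bagged variants build on can be sketched in a few lines. This is the plain algorithm only -- not the variable-projection optimization or the bagging of BOP-DMD described above -- and the function name `exact_dmd` is illustrative.

```python
import numpy as np

def exact_dmd(X, Xp, r):
    """Exact DMD: fit a rank-r linear model Xp ~ A X from paired
    snapshot matrices and return its eigenvalues and modes."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]
    # Project the best-fit operator A = Xp X^+ onto the leading r POD modes
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / S)
    eigvals, W = np.linalg.eig(Atilde)
    # Lift the eigenvectors back to the full state space (exact DMD modes)
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / S) @ W
    return eigvals, modes

# Recover the spectrum of a known linear system x_{k+1} = A x_k
A = np.diag([0.9, 0.5])
snaps = [np.ones(2)]
for _ in range(10):
    snaps.append(A @ snaps[-1])
X, Xp = np.array(snaps[:-1]).T, np.array(snaps[1:]).T
eigvals, modes = exact_dmd(X, Xp, r=2)
```

The bias problem the paper targets arises because this least-squares fit treats X as noise-free; variable projection and bagging address exactly that weakness.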
arXiv Detail & Related papers (2021-07-22T18:14:20Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% less parameters compared to the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the
complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox, where "scale" metrics perform well overall but poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.