Data-Driven Joint Inversions for PDE Models
- URL: http://arxiv.org/abs/2210.09228v1
- Date: Mon, 17 Oct 2022 16:21:45 GMT
- Title: Data-Driven Joint Inversions for PDE Models
- Authors: Kui Ren, Lu Zhang
- Abstract summary: We propose an integrated data-driven and model-based iterative reconstruction framework for such joint inversion problems.
Our method couples the supplementary data with the PDE model to make the data-driven modeling process consistent with the model-based reconstruction procedure.
- Score: 24.162935839841317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of simultaneously reconstructing multiple physical coefficients in
partial differential equations from observed data is ubiquitous in
applications. In this work, we propose an integrated data-driven and
model-based iterative reconstruction framework for such joint inversion
problems where additional data on the unknown coefficients are supplemented for
better reconstructions. Our method couples the supplementary data with the PDE
model to make the data-driven modeling process consistent with the model-based
reconstruction procedure. We characterize the impact of learning uncertainty on
the joint inversion results for two typical model inverse problems. Numerical
evidence is provided to demonstrate the feasibility of using data-driven
models to improve the joint inversion of physical models.
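The coupling idea in the abstract can be illustrated in a heavily simplified linear setting: the PDE model is stood in for by a forward operator `A`, and the supplementary data enter as a learned prior `x_prior` that is folded into each iteration. This is a toy sketch, not the authors' algorithm; `A`, `x_prior`, and all parameters are hypothetical.

```python
import numpy as np

# Toy sketch of an integrated data-driven + model-based iterative
# reconstruction (NOT the paper's algorithm): a linear forward operator A
# stands in for the PDE model, and the supplementary data enter through a
# learned prior x_prior coupled into every gradient step.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)) / np.sqrt(n)      # hypothetical forward map
x_true = np.sin(np.linspace(0.0, np.pi, n))       # unknown coefficient
y = A @ x_true                                    # observed data
x_prior = x_true + 0.05 * rng.standard_normal(n)  # noisy data-driven prior

lam = 0.5                                         # model/data coupling strength
eta = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)     # step size below 1/L for stability
x = np.zeros(n)
for _ in range(2000):
    # model-based misfit gradient coupled with the data-driven prior term
    grad = A.T @ (A @ x - y) + lam * (x - x_prior)
    x -= eta * grad

err = np.linalg.norm(x - x_true)                  # small reconstruction error
```

The point of the sketch is the coupling term `lam * (x - x_prior)`: without it the iteration is a plain model-based Landweber scheme, and with it the data-driven prior steers the reconstruction wherever the forward operator is poorly conditioned.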
Related papers
- CoCoGen: Physically-Consistent and Conditioned Score-based Generative Models for Forward and Inverse Problems [1.0923877073891446]
This work extends the reach of generative models into physical problem domains.
We present an efficient approach to promote consistency with the underlying PDE.
We showcase the potential and versatility of score-based generative models in various physics tasks.
arXiv Detail & Related papers (2023-12-16T19:56:10Z)
- Joint Distributional Learning via Cramer-Wold Distance [0.7614628596146602]
We introduce the Cramer-Wold distance regularization, which can be computed in closed form, to facilitate joint distributional learning for high-dimensional datasets.
We also introduce a two-step learning method to enable flexible prior modeling and improve the alignment between the aggregated posterior and the prior distribution.
arXiv Detail & Related papers (2023-10-25T05:24:23Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A multi-output Gaussian process (MOGP) prior over the parameters of the dedicated likelihoods for classification, regression, and point process tasks facilitates sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- A Neural RDE-based model for solving path-dependent PDEs [5.6293920097580665]
The concept of the path-dependent partial differential equation (PPDE) was first introduced in the context of path-dependent derivatives in financial markets.
Compared to the classical PDE, the solution of a PPDE involves an infinite-dimensional spatial variable.
We propose a rough neural differential equation (NRDE)-based model to learn PPDEs, which effectively encodes the path information through the log-signature feature.
arXiv Detail & Related papers (2023-06-01T20:19:41Z)
- Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore the phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, the non-degeneracy of data distributions, and the model's generalisation capabilities.
The main thrust of our analysis is the influence of nonlinear feature transformations, which map the original data into higher- and possibly infinite-dimensional spaces, on the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Low-rank statistical finite elements for scalable model-data synthesis [0.8602553195689513]
statFEM acknowledges a priori model misspecification by embedding forcing within the governing equations.
The method reconstructs the observed data-generating processes with minimal loss of information.
This article overcomes the scalability hurdle posed by the underlying dense covariance matrix by embedding a low-rank approximation of it.
arXiv Detail & Related papers (2021-09-10T09:51:43Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
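For intuition on KL-based posterior fusion under a mean-field Gaussian assumption: minimizing the sum over datasets k of KL(q || q_k) over a Gaussian q has a closed form, namely averaged precisions and a precision-weighted mean. The sketch below is illustrative only and is not taken from the paper above; the function name and interface are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): fuse K mean-field
# Gaussian posteriors q_k = N(mu_k, var_k) by minimizing
# sum_k KL(q || q_k) over Gaussian q. The minimizer averages the
# precisions and precision-weights the means, elementwise.
def fuse_gaussians(mus, variances):
    mus = np.asarray(mus, dtype=float)            # shape (K, d)
    precs = 1.0 / np.asarray(variances, dtype=float)
    fused_prec = precs.mean(axis=0)               # averaged precision
    fused_mu = (precs * mus).sum(axis=0) / precs.sum(axis=0)
    return fused_mu, 1.0 / fused_prec

mu, var = fuse_gaussians([[0.0], [2.0]], [[1.0], [1.0]])
# with equal variances, the fused mean is the simple average of the means
```

When the component variances differ, the fused mean shifts toward the more confident (lower-variance) posterior, which is the behaviour one wants when the individual datasets are of unequal quality.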
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.