Dimension-reduced KRnet maps for high-dimensional inverse problems
- URL: http://arxiv.org/abs/2303.00573v1
- Date: Wed, 1 Mar 2023 15:16:27 GMT
- Title: Dimension-reduced KRnet maps for high-dimensional inverse problems
- Authors: Yani Feng, Kejun Tang, Xiaoliang Wan, Qifeng Liao
- Abstract summary: We present a dimension-reduced KRnet map approach (DR-KRnet) for high-dimensional inverse problems.
Our approach consists of two main components: data-driven VAE prior and density approximation of the posterior of the latent variable.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a dimension-reduced KRnet map approach (DR-KRnet) for
high-dimensional inverse problems, which is based on an explicit construction
of a map that pushes forward the prior measure to the posterior measure in the
latent space. Our approach consists of two main components: data-driven VAE
prior and density approximation of the posterior of the latent variable. In
reality, it may not be trivial to initialize a prior distribution that is
consistent with available prior data; in other words, the complex prior
information is often beyond simple hand-crafted priors. We employ a variational
autoencoder (VAE) to approximate the underlying distribution of the prior
dataset, which is achieved through a latent variable and a decoder. Using the
decoder provided by the VAE prior, we reformulate the problem in a
low-dimensional latent space. In particular, we seek an invertible transport
map given by KRnet to approximate the posterior distribution of the latent
variable. Moreover, an efficient physics-constrained surrogate model without
any labeled data is constructed to reduce the computational cost of solving
both forward and adjoint problems involved in likelihood computation. Numerical
experiments are conducted to demonstrate the validity, accuracy, and
efficiency of DR-KRnet.
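The pipeline above can be summarized as follows: a trained VAE decoder D maps a low-dimensional latent variable z to the parameter field u = D(z), the forward model F (or its physics-constrained surrogate) maps u to predicted observations, and an invertible map T given by KRnet is trained to push a latent reference density forward to the latent posterior p(z | d) ∝ p(d | F(D(z))) p(z). Below is a minimal, self-contained PyTorch sketch of this latent-space step only. Everything in it is an illustrative assumption: a small affine-coupling flow stands in for KRnet, frozen random linear layers stand in for the trained VAE decoder and the surrogate forward model, the latent prior is taken as a standard Gaussian, and the noise level `noise_std` is invented. It sketches the generic reverse-KL training of a transport map toward an unnormalized posterior, not the authors' implementation.

```python
# Hypothetical sketch of DR-KRnet's latent-space posterior approximation.
# A toy affine-coupling flow stands in for KRnet; frozen random linear maps
# stand in for the trained VAE decoder and the physics-constrained surrogate.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_latent, d_param, d_obs = 8, 64, 16

# Stand-ins (assumptions): in the paper these are a trained VAE decoder and a
# physics-constrained surrogate of the forward model, not random linear maps.
decoder = nn.Linear(d_latent, d_param)     # z -> parameter field u
forward_op = nn.Linear(d_param, d_obs)     # u -> predicted observations
for p in list(decoder.parameters()) + list(forward_op.parameters()):
    p.requires_grad_(False)                # both are fixed during flow training

obs = torch.randn(d_obs)                   # synthetic data d (assumption)
noise_std = 0.1                            # assumed Gaussian noise level

def log_posterior(z):
    """Unnormalized log p(z | d) = log p(d | F(D(z))) + log p(z)."""
    resid = forward_op(decoder(z)) - obs
    log_lik = -0.5 * (resid ** 2).sum(dim=-1) / noise_std ** 2
    log_prior = -0.5 * (z ** 2).sum(dim=-1)   # N(0, I) latent prior from the VAE
    return log_lik + log_prior

class AffineCoupling(nn.Module):
    """One affine coupling layer; KRnet's triangular structure is richer."""
    def __init__(self, dim, flip):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(nn.Linear(half, 64), nn.Tanh(),
                                 nn.Linear(64, 2 * half))
    def forward(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        if self.flip:
            z1, z2 = z2, z1
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)                  # keep the scale well-conditioned
        z2 = z2 * torch.exp(s) + t
        out = torch.cat([z2, z1] if self.flip else [z1, z2], dim=-1)
        return out, s.sum(dim=-1)          # log|det Jacobian| of this layer

flow = nn.ModuleList([AffineCoupling(d_latent, flip=i % 2 == 1) for i in range(4)])
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

for step in range(2000):
    z0 = torch.randn(256, d_latent)        # samples from the reference density
    z, log_det = z0, torch.zeros(256)
    for layer in flow:                     # z = T(z0), accumulating log|det J|
        z, ld = layer(z)
        log_det = log_det + ld
    # Reverse KL: E[log q(T(z0)) - log p(T(z0) | d)], where by change of
    # variables log q(T(z0)) = log N(z0; 0, I) - log|det J_T(z0)|.
    log_q = -0.5 * (z0 ** 2).sum(dim=-1) - log_det
    loss = (log_q - log_posterior(z)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

One design note: the reverse-KL objective only needs the unnormalized log-posterior, so the intractable evidence term never appears; this is one standard way transport maps such as KRnet are fit to Bayesian posteriors, with the expensive forward/adjoint solves inside `log_posterior` replaced in the paper by a label-free physics-constrained surrogate.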
Related papers
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi) for PDE solving.
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
- Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler [3.717366858126521]
The prior distribution is learned from available prior measurements, making its selection an important representation-learning task.
SA-Roundtrip, a novel deep generative prior, is introduced to enable controlled sample generation and to identify the data's intrinsic dimension.
arXiv Detail & Related papers (2023-10-24T17:16:45Z)
- Distributed Variational Inference for Online Supervised Learning [15.038649101409804]
This paper develops a scalable distributed probabilistic inference algorithm.
It applies to continuous variables, intractable posteriors and large-scale real-time data in sensor networks.
arXiv Detail & Related papers (2023-09-05T22:33:02Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Semi-supervised Invertible DeepONets for Bayesian Inverse Problems [8.594140167290098]
DeepONets offer a powerful, data-driven tool for solving parametric PDEs by learning operators.
In this work, we employ physics-informed DeepONets in the context of high-dimensional, Bayesian inverse problems.
arXiv Detail & Related papers (2022-09-06T18:55:06Z)
- Deep Preconditioners and their application to seismic wavefield processing [0.0]
Sparsity-promoting inversion, coupled with fixed-basis sparsifying transforms, represents the go-to approach for many processing tasks.
We propose to train an AutoEncoder network to learn a direct mapping between the input seismic data and a representative latent manifold.
The trained decoder is subsequently used as a nonlinear preconditioner for the physics-driven inverse problem at hand.
arXiv Detail & Related papers (2022-07-20T14:25:32Z)
- Orthogonal Matrix Retrieval with Spatial Consensus for 3D Unknown-View Tomography [58.60249163402822]
Unknown-view tomography (UVT) reconstructs a 3D density map from its 2D projections at unknown, random orientations.
The proposed OMR is more robust and performs significantly better than the previous state-of-the-art OMR approach.
arXiv Detail & Related papers (2022-07-06T21:40:59Z)
- Information Entropy Initialized Concrete Autoencoder for Optimal Sensor Placement and Reconstruction of Geophysical Fields [58.720142291102135]
We propose a new approach to the optimal placement of sensors for reconstructing geophysical fields from sparse measurements.
We demonstrate our method on two examples: (a) temperature and (b) salinity fields around the Barents Sea and the Svalbard group of islands.
We find that the obtained optimal sensor locations have a clear physical interpretation and correspond to the boundaries between sea currents.
arXiv Detail & Related papers (2022-06-28T12:43:38Z)
- Regressive Domain Adaptation for Unsupervised Keypoint Detection [67.2950306888855]
Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings large improvements of 8% to 11% in terms of PCK on different datasets.
arXiv Detail & Related papers (2021-03-10T16:45:22Z)
- Generative Model without Prior Distribution Matching [26.91643368299913]
Variational Autoencoder (VAE) and its variations are classic generative models that learn a low-dimensional latent representation satisfying some prior distribution.
We propose to let the prior match the embedding distribution rather than forcing the latent variables to fit the prior.
arXiv Detail & Related papers (2020-09-23T09:33:24Z)
- Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding [52.48298164494608]
Variational autoencoder (VAE) estimates the posterior parameters of the latent variables corresponding to each input data point.
This paper provides a quantitative understanding of VAE properties through differential-geometric and information-theoretic interpretations of VAE.
arXiv Detail & Related papers (2020-07-30T02:37:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.