Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler
- URL: http://arxiv.org/abs/2310.17817v1
- Date: Tue, 24 Oct 2023 17:16:45 GMT
- Title: Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler
- Authors: Jiayu Qian, Yuanyuan Liu, Jingya Yang and Qingping Zhou
- Abstract summary: The prior distribution is learned from available prior measurements, making its selection an important representation-learning task.
The SA-Roundtrip, a novel deep generative prior, is introduced to enable controlled sampling generation and identify the data's intrinsic dimension.
- Score: 3.717366858126521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian inference with deep generative prior has received considerable
interest for solving imaging inverse problems in many scientific and
engineering fields. The prior distribution is learned from available prior
measurements, making its selection an important representation-learning task.
The SA-Roundtrip, a novel deep generative prior, is introduced to
enable controlled sampling generation and identify the data's intrinsic
dimension. This prior incorporates a self-attention structure within a
bidirectional generative adversarial network. Subsequently, Bayesian inference
is applied to the posterior distribution in the low-dimensional latent space
using the Hamiltonian Monte Carlo with preconditioned Crank-Nicolson (HMC-pCN)
algorithm, which is proven to be ergodic under specific conditions. Experiments
conducted on computed tomography (CT) reconstruction with the MNIST and
TomoPhantom datasets reveal that the proposed method outperforms
state-of-the-art comparisons, consistently yielding a robust and superior point
estimator along with precise uncertainty quantification.
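The preconditioned Crank-Nicolson (pCN) proposal that the HMC-pCN sampler builds on can be sketched as follows. This is a minimal plain-pCN step, not the paper's full HMC-pCN algorithm, assuming a standard Gaussian prior on the latent variables; `neg_log_lik` is a hypothetical stand-in for the forward-model data misfit.

```python
import numpy as np

def pcn_step(z, neg_log_lik, beta=0.2, rng=None):
    """One preconditioned Crank-Nicolson step targeting
    pi(z) proportional to exp(-neg_log_lik(z)) * N(z; 0, I).
    The proposal is reversible with respect to the Gaussian prior,
    so the acceptance ratio involves only the likelihood term,
    which keeps the acceptance rate stable as the dimension grows."""
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(z.shape)
    z_prop = np.sqrt(1.0 - beta**2) * z + beta * xi
    log_accept = neg_log_lik(z) - neg_log_lik(z_prop)
    if np.log(rng.uniform()) < log_accept:
        return z_prop, True
    return z, False

# toy usage: Gaussian likelihood centered at 1, posterior N(0.5, 0.5 I)
nll = lambda z: 0.5 * np.sum((z - 1.0) ** 2)
z = np.zeros(4)
for _ in range(2000):
    z, _ = pcn_step(z, nll)
```

The key design point, relative to a plain random-walk proposal, is the `sqrt(1 - beta**2)` shrinkage toward the prior mean, which makes the proposal dimension-robust.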
Related papers
- Quasi-Bayes meets Vines [2.3124143670964448]
We propose a different way to extend Quasi-Bayesian prediction to high dimensions through the use of Sklar's theorem.
We show that our proposed Quasi-Bayesian Vine (QB-Vine) is a fully non-parametric density estimator with an analytical form.
arXiv Detail & Related papers (2024-06-18T16:31:02Z)
- Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z)
- A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors [5.54475507578913]
The distribution of the weights of modern deep neural networks (DNNs) is an eminently complex object due to its extremely high dimensionality.
This paper proposes one of the first large-scale explorations of the posterior distribution of BNNs, expanding its study to real-world vision tasks and architectures.
arXiv Detail & Related papers (2023-10-12T12:45:13Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over the structure of a Bayesian Network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- Object based Bayesian full-waveform inversion for shear elastography [0.0]
We develop a computational framework to quantify uncertainty in shear elastography imaging of anomalies in tissues.
We find the posterior probability of parameter fields representing the geometry of the anomalies and their shear moduli.
We demonstrate the approach on synthetic two dimensional tests with smooth and irregular shapes.
arXiv Detail & Related papers (2023-05-11T08:25:25Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Bayesian neural network priors for edge-preserving inversion [3.2046720177804646]
A class of prior distributions based on the output of neural networks with heavy-tailed weights is introduced.
We show theoretically that samples from such priors have desirable discontinuous-like properties even when the network width is finite.
arXiv Detail & Related papers (2021-12-20T16:39:05Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
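REST's exact architecture is described in the paper; as a generic illustration of the algorithm-unfolding idea it relies on, here is a minimal unrolled ISTA (LISTA-style) sketch. All names and the fixed-weight choices below are illustrative assumptions, not the REST design.

```python
import numpy as np

def soft_threshold(x, lam):
    # proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(y, A, n_layers=10, lam=0.1):
    """Unfold ISTA iterations for min_x 0.5*||A x - y||^2 + lam*||x||_1
    into a fixed-depth network. In a learned unrolling, W1, W2, and the
    thresholds would be trainable per layer; here they are fixed to the
    classical ISTA values with step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    W1 = A.T / L                            # data-fidelity branch
    W2 = np.eye(A.shape[1]) - A.T @ A / L   # state-mixing branch
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W2 @ x + W1 @ y, lam / L)
    return x
```

Robust variants replace the data-fidelity step with one derived from a mis-specification-tolerant recovery objective, while keeping the same layer-per-iteration structure.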
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Instance-Optimal Compressed Sensing via Posterior Sampling [101.43899352984774]
We show for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees.
We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
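The Langevin-dynamics posterior-sampling idea can be sketched as follows. `G` and `grad_G` are hypothetical stand-ins for a trained generator and its Jacobian, and this unadjusted variant omits the Metropolis correction used by exact samplers.

```python
import numpy as np

def langevin_posterior_sample(y, G, grad_G, sigma=0.1, step=1e-3,
                              n_steps=5000, dim=8, rng=None):
    """Unadjusted Langevin dynamics on the latent posterior
    p(z | y) proportional to exp(-||y - G(z)||^2 / (2 sigma^2)) * N(z; 0, I).
    grad_G(z) must return the Jacobian of the generator G at z."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(dim)
    for _ in range(n_steps):
        resid = y - G(z)
        # gradient of the log-posterior: likelihood term plus Gaussian prior
        grad_log_post = grad_G(z).T @ resid / sigma**2 - z
        z = z + step * grad_log_post + np.sqrt(2 * step) * rng.standard_normal(dim)
    return z
```

Running several independent chains and decoding each final `z` through `G` gives the diverse posterior samples (rather than a single MAP point) that the summary refers to.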
arXiv Detail & Related papers (2021-06-21T22:51:56Z)
- What Are Bayesian Neural Network Posteriors Really Like? [63.950151520585024]
We show that Hamiltonian Monte Carlo can achieve significant performance gains over standard and deep ensembles.
We also show that deep ensemble distributions are as close to HMC as standard SGLD, and closer than standard variational inference.
arXiv Detail & Related papers (2021-04-29T15:38:46Z)
- Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks: Theory, Methods, and Algorithms [2.266704469122763]
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data.
We establish the existence and well-posedness of the associated posterior moments under easily verifiable conditions.
A model accuracy analysis suggests that the Bayesian probabilities reported by the data-driven models are also remarkably accurate under a frequentist definition.
arXiv Detail & Related papers (2021-03-18T11:34:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.