Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler
- URL: http://arxiv.org/abs/2310.17817v1
- Date: Tue, 24 Oct 2023 17:16:45 GMT
- Title: Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler
- Authors: Jiayu Qian, Yuanyuan Liu, Jingya Yang and Qingping Zhou
- Abstract summary: The prior distribution is learned from available prior measurements, making its selection an important representation-learning task.
The SA-Roundtrip, a novel deep generative prior, is introduced to enable controlled sample generation and to identify the data's intrinsic dimension.
- Score: 3.717366858126521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian inference with deep generative prior has received considerable
interest for solving imaging inverse problems in many scientific and
engineering fields. The prior distribution is learned from available prior
measurements, making its selection an important representation-learning task.
The SA-Roundtrip, a novel deep generative prior, is introduced to
enable controlled sample generation and to identify the data's intrinsic
dimension. This prior incorporates a self-attention structure within a
bidirectional generative adversarial network. Subsequently, Bayesian inference
is applied to the posterior distribution in the low-dimensional latent space
using the Hamiltonian Monte Carlo with preconditioned Crank-Nicolson (HMC-pCN)
algorithm, which is proven to be ergodic under specific conditions. Experiments
conducted on computed tomography (CT) reconstruction with the MNIST and
TomoPhantom datasets reveal that the proposed method outperforms
state-of-the-art baselines, consistently yielding a robust, accurate point
estimate along with precise uncertainty quantification.
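
For intuition, the preconditioned Crank-Nicolson (pCN) mechanism at the heart of the HMC-pCN sampler can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it runs plain pCN on the low-dimensional latent posterior, assuming a standard Gaussian latent prior, and the names generator, forward_op, y_obs, sigma, and beta are hypothetical placeholders for a pretrained SA-Roundtrip generator, the CT projection operator, the observed sinogram, the noise level, and the pCN step size.

```python
# Minimal pCN sketch on a latent posterior (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def data_misfit(z, generator, forward_op, y_obs, sigma):
    """Phi(z) = ||y - A G(z)||^2 / (2 sigma^2) for a Gaussian likelihood."""
    residual = y_obs - forward_op(generator(z))
    return 0.5 * np.sum(residual ** 2) / sigma ** 2

def pcn_sample(n_steps, z0, generator, forward_op, y_obs, sigma, beta=0.1):
    """Sample exp(-Phi(z)) N(z; 0, I); pCN preserves the N(0, I) prior,
    so the acceptance ratio involves only the data misfit Phi."""
    z = z0.copy()
    phi = data_misfit(z, generator, forward_op, y_obs, sigma)
    samples = []
    for _ in range(n_steps):
        # pCN proposal: z' = sqrt(1 - beta^2) * z + beta * xi, xi ~ N(0, I)
        z_prop = np.sqrt(1.0 - beta ** 2) * z + beta * rng.standard_normal(z.shape)
        phi_prop = data_misfit(z_prop, generator, forward_op, y_obs, sigma)
        # Accept with probability min(1, exp(phi - phi_prop))
        if np.log(rng.uniform()) < phi - phi_prop:
            z, phi = z_prop, phi_prop
        samples.append(z.copy())
    return np.array(samples)
```

Because the pCN proposal is reversible with respect to the Gaussian latent prior, its acceptance rate does not degrade with latent dimension; HMC-pCN replaces this random-walk proposal with Hamiltonian dynamics to improve mixing.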
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
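To make the AMP recursion these networks unroll concrete, here is a hedged sketch of classical AMP for compressed sensing with a soft-threshold denoiser; the dimensions, threshold rule, and noise level below are illustrative assumptions rather than the paper's setup.

```python
# Hedged AMP sketch: soft-threshold denoising plus the Onsager correction term.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20                            # signal dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)      # normalized Gaussian measurements
y = A @ x_true + 0.01 * rng.standard_normal(m)

def soft(r, t):
    """Soft-threshold denoiser; unrolled networks learn this map layer by layer."""
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.sqrt(np.mean(z ** 2))                # effective noise-level estimate
    r = x + A.T @ z                               # pseudo-data fed to the denoiser
    x = soft(r, 1.5 * tau)                        # denoising step (1.5 is a heuristic)
    onsager = (n / m) * np.mean(np.abs(x) > 0) * z  # Onsager correction
    z = y - A @ x + onsager                       # corrected residual
```

The Onsager term keeps the effective noise in r approximately Gaussian across iterations, which is what lets the denoiser be analyzed, and in the unrolled setting learned, layer by layer.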
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Function-Space MCMC for Bayesian Wide Neural Networks [9.899763598214124]
We investigate the use of the preconditioned Crank-Nicolson algorithm and its Langevin version to sample from the reparametrised posterior distribution of the weights.
We prove that the acceptance probabilities of the proposed methods approach 1 as the width of the network increases.
arXiv Detail & Related papers (2024-08-26T14:54:13Z)
- Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Content generated by recent advanced Text-to-Image (T2I) diffusion models is sometimes too imaginative for existing off-the-shelf dense predictors to handle.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z)
- A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors [5.54475507578913]
The distribution of the weights of modern deep neural networks (DNNs) is an eminently complex object due to its extremely high dimensionality.
This paper proposes one of the first large-scale explorations of the posterior distribution of Bayesian neural networks (BNNs), expanding its study to real-world vision tasks and architectures.
arXiv Detail & Related papers (2023-10-12T12:45:13Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose to approximate the joint posterior over both the structure and the parameters of a Bayesian Network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Bayesian neural network priors for edge-preserving inversion [3.2046720177804646]
A class of prior distributions based on the output of neural networks with heavy-tailed weights is introduced.
We show theoretically that samples from such priors have desirable discontinuous-like properties even when the network width is finite.
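A quick hedged illustration of this behavior: drawing a function from a finite-width ReLU network whose weights are Student-t distributed produces sample paths with sharp, jump-like transitions, which is the edge-preserving property described above. The width, degrees of freedom, and scaling below are assumptions for illustration only.

```python
# One prior draw from a shallow ReLU network with heavy-tailed (Student-t) weights.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 500)[:, None]          # 1-D input grid
width, df = 128, 1.5                              # finite width, heavy tails
W1 = rng.standard_t(df, size=(1, width))
b1 = rng.standard_t(df, size=width)
W2 = rng.standard_t(df, size=(width, 1)) / width  # tame the output scale
f = np.maximum(x @ W1 + b1, 0.0) @ W2             # sample path f(x), shape (500, 1)
```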
arXiv Detail & Related papers (2021-12-20T16:39:05Z)
- Instance-Optimal Compressed Sensing via Posterior Sampling [101.43899352984774]
We show, for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees.
We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
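As a rough sketch of the Langevin sampler mentioned here (under assumed names, not the authors' code), one pass of unadjusted Langevin dynamics on the latent posterior looks like the following; grad_log_posterior, eps, and n_steps are illustrative assumptions.

```python
# Hedged sketch of unadjusted Langevin dynamics (ULA) on a latent posterior.
import numpy as np

rng = np.random.default_rng(0)

def langevin_sample(z0, grad_log_posterior, eps=1e-3, n_steps=1000):
    """Iterate z <- z + eps * grad log p(z | y) + sqrt(2 * eps) * xi."""
    z = z0.copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(z.shape)
        z = z + eps * grad_log_posterior(z) + np.sqrt(2.0 * eps) * noise
    return z
```

Running several such chains from different initializations is one way to obtain the sample diversity contrasted with the single MAP estimate above.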
arXiv Detail & Related papers (2021-06-21T22:51:56Z)
- What Are Bayesian Neural Network Posteriors Really Like? [63.950151520585024]
We show that Hamiltonian Monte Carlo can achieve significant performance gains over standard training and deep ensembles.
We also show that deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.
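For reference, the core of any HMC sampler, including the one used in this comparison, is a leapfrog integrator. Below is a minimal hedged sketch, where grad_U is the gradient of the negative log-posterior and eps / n_leapfrog are assumed tuning parameters, not values from the paper.

```python
# Hedged sketch of the leapfrog integrator used inside HMC.
def leapfrog(z, p, grad_U, eps, n_leapfrog):
    """Simulate Hamiltonian dynamics for H(z, p) = U(z) + 0.5 * p.p."""
    p = p - 0.5 * eps * grad_U(z)          # initial half step for momentum
    for _ in range(n_leapfrog - 1):
        z = z + eps * p                    # full position step
        p = p - eps * grad_U(z)            # full momentum step
    z = z + eps * p                        # final position step
    p = p - 0.5 * eps * grad_U(z)          # final half step for momentum
    return z, -p                           # negate momentum for reversibility
```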
arXiv Detail & Related papers (2021-04-29T15:38:46Z)
- Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks: Theory, Methods, and Algorithms [2.266704469122763]
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data.
We establish the existence and well-posedness of the associated posterior moments under easily verifiable conditions.
A model accuracy analysis suggests that the Bayesian probabilities reported by the data-driven models are also remarkably accurate under a frequentist definition.
arXiv Detail & Related papers (2021-03-18T11:34:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.