Conditional Variational Autoencoder for Learned Image Reconstruction
- URL: http://arxiv.org/abs/2110.11681v2
- Date: Mon, 25 Oct 2021 01:10:52 GMT
- Title: Conditional Variational Autoencoder for Learned Image Reconstruction
- Authors: Chen Zhang and Riccardo Barbano and Bangti Jin
- Abstract summary: We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation.
It handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets.
- Score: 5.487951901731039
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learned image reconstruction techniques using deep neural networks have
recently gained popularity, and have delivered promising empirical results.
However, most approaches focus on a single recovery for each observation, and
thus neglect the uncertainty information. In this work, we develop a novel
computational framework that approximates the posterior distribution of the
unknown image at each query observation. The proposed framework is very
flexible: It handles implicit noise models and priors, it incorporates the data
formation process (i.e., the forward operator), and the learned reconstructive
properties are transferable between different datasets. Once the network is
trained using the conditional variational autoencoder loss, it provides a
computationally efficient sampler for the approximate posterior distribution
via feed-forward propagation, and the summarizing statistics of the generated
samples are used for both point-estimation and uncertainty quantification. We
illustrate the proposed framework with extensive numerical experiments on
positron emission tomography (with both moderate and low count levels) showing
that the framework generates high-quality samples when compared with
state-of-the-art methods.
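The sampling procedure the abstract describes (feed-forward draws from a conditional latent distribution, decoded into image samples, then summarized for a point estimate and uncertainty) can be sketched as follows. This is a toy illustration only: `prior_net` and `decoder` are hypothetical linear stand-ins for the trained CVAE networks, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_net(y):
    """Hypothetical conditional prior network: observation y -> mean and
    log-variance of the latent z | y (latent dimension 2, chosen arbitrarily)."""
    mu = 0.5 * y[:2]
    log_var = -1.0 * np.ones(2)
    return mu, log_var

def decoder(z, y):
    """Hypothetical decoder: latent sample (and observation) -> image estimate."""
    return np.tanh(z.sum()) + 0.1 * y

def posterior_samples(y, n_samples=1000):
    """Feed-forward sampling from the approximate posterior of x given y."""
    mu, log_var = prior_net(y)
    std = np.exp(0.5 * log_var)
    samples = []
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(mu.shape)  # reparameterized draw
        samples.append(decoder(z, y))
    return np.stack(samples)

y = np.array([0.2, -0.1, 0.4])   # a query observation
xs = posterior_samples(y)
x_mean = xs.mean(axis=0)         # point estimate
x_std = xs.std(axis=0)           # pixel-wise uncertainty
```

Because sampling is a single forward pass per draw, the cost of uncertainty quantification scales linearly in the number of samples with no retraining.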
Related papers
- ReNoise: Real Image Inversion Through Iterative Noising [62.96073631599749]
We introduce an inversion method with a high quality-to-operation ratio, enhancing reconstruction accuracy without increasing the number of operations.
We evaluate the performance of our ReNoise technique using various sampling algorithms and models, including recent accelerated diffusion models.
arXiv Detail & Related papers (2024-03-21T17:52:08Z)
- Diffusion Posterior Proximal Sampling for Image Restoration [27.35952624032734]
We present a refined paradigm for diffusion-based image restoration.
Specifically, we opt for a sample consistent with the measurement identity at each generative step.
The number of candidate samples used for selection is adaptively determined based on the signal-to-noise ratio of the timestep.
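The measurement-consistent selection step described in this entry can be sketched minimally: among several candidate samples, keep the one whose forward projection best matches the measurement. All names here are illustrative assumptions, with a simple linear forward operator `A` standing in for the imaging system.

```python
import numpy as np

rng = np.random.default_rng(2)

def measurement_residual(x, y, A):
    """Data-fit residual ||A x - y|| for candidate x under measurement y."""
    return float(np.linalg.norm(A @ x - y))

def select_consistent(candidates, y, A):
    """Keep the candidate most consistent with the measurement identity."""
    return min(candidates, key=lambda x: measurement_residual(x, y, A))

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy forward operator
x_true = np.array([0.5, -0.2])
y = A @ x_true                                       # noiseless toy measurement
candidates = [x_true + 0.3 * rng.standard_normal(2) for _ in range(8)]
x_sel = select_consistent(candidates, y, A)
```

In the paper the number of candidates is chosen adaptively from the timestep's signal-to-noise ratio; here it is fixed at 8 for simplicity.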
arXiv Detail & Related papers (2024-02-25T04:24:28Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Stable Deep MRI Reconstruction using Generative Priors [13.400444194036101]
We propose a novel deep neural network based regularizer which is trained in a generative setting on reference magnitude images only.
The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods.
arXiv Detail & Related papers (2022-10-25T08:34:29Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Score-based diffusion models for accelerated MRI [35.3148116010546]
We introduce a way to sample data from a conditional distribution given the measurements, such that the model can be readily used for solving inverse problems in imaging.
Our model requires magnitude images only for training, and yet is able to reconstruct complex-valued data, and even extends to parallel imaging.
arXiv Detail & Related papers (2021-10-08T08:42:03Z)
- Scene Uncertainty and the Wellington Posterior of Deterministic Image Classifiers [68.9065881270224]
We introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene.
We explore the use of data augmentation, dropout, ensembling, single-view reconstruction, and model linearization to compute a Wellington Posterior.
Additional methods include the use of conditional generative models such as generative adversarial networks, neural radiance fields, and conditional prior networks.
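One of the strategies this entry lists, data augmentation, admits a minimal sketch: perturb the input to emulate alternative renderings of the same scene, run the deterministic classifier on each, and read off the empirical distribution of outcomes. The classifier below is a hypothetical toy, not any model from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def classifier(x):
    """Hypothetical deterministic classifier returning one of 3 class indices."""
    return int(np.argmax([x.sum(), x.mean() + 0.5, x.max()]))

def wellington_posterior(x, n_perturb=200, noise=0.1):
    """Empirical distribution of classifier outcomes over perturbed inputs,
    using additive-noise augmentation as a proxy for scene re-rendering."""
    counts = np.zeros(3)
    for _ in range(n_perturb):
        x_aug = x + noise * rng.standard_normal(x.shape)
        counts[classifier(x_aug)] += 1
    return counts / counts.sum()

p = wellington_posterior(np.array([0.1, 0.2, 0.3]))
```

A concentrated distribution indicates the prediction is stable under plausible variations of the scene; a spread-out one flags scene-induced uncertainty.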
arXiv Detail & Related papers (2021-06-25T20:10:00Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
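The core idea of this entry, a dissimilarity score computed in the latent space of a VAE, can be sketched as follows. The `encoder` here is a hypothetical two-feature stand-in for a trained VAE encoder, and Euclidean distance is one possible choice of dissimilarity function.

```python
import numpy as np

def encoder(x):
    """Hypothetical VAE encoder: slice -> latent mean vector (2 features)."""
    return np.array([x.mean(), x.std()])

def latent_dissimilarity(x, x_ref):
    """Dissimilarity between a slice and a healthy reference, measured in
    latent space rather than pixel space (Euclidean distance here)."""
    return float(np.linalg.norm(encoder(x) - encoder(x_ref)))

healthy = np.zeros(64)
slice_with_anomaly = np.zeros(64)
slice_with_anomaly[10:20] = 3.0   # bright lesion-like region
score = latent_dissimilarity(slice_with_anomaly, healthy)
```

Thresholding such a score per slice yields the slice-wise detection decision; anomalous slices should sit farther from the healthy reference in latent space.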
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Quantifying Model Uncertainty in Inverse Problems via Bayesian Deep Gradient Descent [4.029853654012035]
Recent advances in inverse problems leverage powerful data-driven models, e.g., deep neural networks.
We develop a scalable, data-driven, knowledge-aided computational framework to quantify the model uncertainty via Bayesian neural networks.
arXiv Detail & Related papers (2020-07-20T09:43:31Z)
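A common way to realize model uncertainty with Bayesian neural networks, as in the entry above, is Monte Carlo dropout: keep dropout active at test time and treat repeated stochastic forward passes as approximate posterior samples. This is a generic toy sketch of that technique, not the paper's Bayesian deep gradient descent method; the weight matrix and network are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

W = rng.standard_normal((4, 4))  # hypothetical trained weight matrix

def mc_dropout_forward(x, p_drop=0.5):
    """One stochastic forward pass: ReLU layer with dropout kept active."""
    h = np.maximum(W @ x, 0.0)
    mask = rng.random(h.shape) >= p_drop      # randomly drop hidden units
    return (h * mask / (1.0 - p_drop)).sum()  # rescale to preserve expectation

def predict_with_uncertainty(x, n_passes=500):
    """Mean prediction and predictive standard deviation over MC passes."""
    outs = np.array([mc_dropout_forward(x) for _ in range(n_passes)])
    return outs.mean(), outs.std()

mean, std = predict_with_uncertainty(np.ones(4))
```

The spread across passes serves as the model-uncertainty estimate for that input.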
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.