Compressed Sensing MRI Reconstruction Regularized by VAEs with
Structured Image Covariance
- URL: http://arxiv.org/abs/2210.14586v2
- Date: Fri, 16 Jun 2023 12:10:44 GMT
- Title: Compressed Sensing MRI Reconstruction Regularized by VAEs with
Structured Image Covariance
- Authors: Margaret Duff, Ivor J. A. Simpson, Matthias J. Ehrhardt, Neill D. F.
Campbell
- Abstract summary: This paper investigates how generative models, trained on ground-truth images, can be used as priors for inverse problems.
We utilize variational autoencoders (VAEs) that generate not only an image but also a covariance uncertainty matrix for each image.
We compare our proposed learned regularization against other unlearned regularization approaches and unsupervised and supervised deep learning methods.
- Score: 7.544757765701024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: This paper investigates how generative models, trained on
ground-truth images, can be used as priors for inverse problems,
penalizing reconstructions far from images the generator can produce. The aim
is that learned regularization will provide complex data-driven priors to
inverse problems while still retaining the control and insight of a variational
regularization method. Moreover, unsupervised learning, without paired training
data, allows the learned regularizer to remain flexible to changes in the
forward problem such as noise level, sampling pattern or coil sensitivities in
MRI.
Approach: We utilize variational autoencoders (VAEs) that generate not only
an image but also a covariance uncertainty matrix for each image. The
covariance can model changing uncertainty dependencies caused by structure in
the image, such as edges or objects, and provides a new distance metric from
the manifold of learned images.
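
A minimal sketch of the structured-covariance idea described above, assuming a diagonal-plus-low-rank parameterization; the architecture and every name here (CovarianceVAEDecoder, manifold_distance, latent_dim, rank) are illustrative placeholders rather than the authors' exact model.

```python
import torch
import torch.nn as nn

class CovarianceVAEDecoder(nn.Module):
    """Hypothetical decoder: maps a latent code z to an image mean and a
    structured (diagonal + low-rank) covariance over the image pixels."""
    def __init__(self, latent_dim=64, img_pixels=64 * 64, rank=8):
        super().__init__()
        self.mean_net = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                      nn.Linear(512, img_pixels))
        self.log_var_net = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                         nn.Linear(512, img_pixels))
        self.factor_net = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                        nn.Linear(512, img_pixels * rank))
        self.rank = rank

    def forward(self, z):
        mu = self.mean_net(z)                    # image mean, flattened
        diag = torch.exp(self.log_var_net(z))    # per-pixel variances
        fac = self.factor_net(z).view(-1, mu.shape[-1], self.rank)
        return mu, diag, fac                     # Sigma(z) = diag(diag) + fac fac^T


def manifold_distance(x, mu, diag, fac):
    """Mahalanobis-type distance (x - mu)^T Sigma(z)^{-1} (x - mu), computed
    with the Woodbury identity so the full covariance is never materialized."""
    r = (x - mu).unsqueeze(-1)                   # residual, shape (B, N, 1)
    dinv_r = r / diag.unsqueeze(-1)
    dinv_f = fac / diag.unsqueeze(-1)
    cap = torch.eye(fac.shape[-1], device=fac.device, dtype=fac.dtype) \
          + fac.transpose(1, 2) @ dinv_f         # small (rank x rank) system
    corr = dinv_f @ torch.linalg.solve(cap, fac.transpose(1, 2) @ dinv_r)
    return (r * (dinv_r - corr)).sum(dim=(1, 2))
```

The diagonal-plus-low-rank form keeps the inverse-covariance solve linear in the number of pixels, which is what makes a non-diagonal image covariance tractable in this sketch.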
Main results: We evaluate these novel generative regularizers on
retrospectively sub-sampled real-valued MRI measurements from the fastMRI
dataset. We compare our proposed learned regularization against other unlearned
regularization approaches and unsupervised and supervised deep learning
methods.
Significance: Our results show that the proposed method is competitive with
other state-of-the-art methods and behaves consistently with changing sampling
patterns and noise levels.
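
As a rough illustration of how such a learned regularizer could enter a variational reconstruction, the sketch below jointly optimizes an image and a latent code against undersampled k-space data, assuming the hypothetical CovarianceVAEDecoder and manifold_distance above; the masked-Fourier forward model, real-valued image, and all hyperparameters are assumptions, not the paper's exact objective or algorithm.

```python
import torch

def reconstruct(y, mask, decoder, lam=1.0, steps=500, lr=1e-2, img_shape=(64, 64)):
    """Minimize data fidelity plus the covariance-weighted distance to the
    VAE manifold, over both the image x and the latent code z."""
    n = img_shape[0] * img_shape[1]
    x = torch.zeros(1, n, requires_grad=True)     # reconstructed image (flattened)
    z = torch.zeros(1, 64, requires_grad=True)    # latent code (latent_dim = 64 above)
    opt = torch.optim.Adam([x, z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Data fidelity: sampling mask applied to the 2D FFT of x vs. measured k-space y
        k = torch.fft.fft2(x.view(1, *img_shape), norm="ortho")
        data_fit = ((mask * k - y).abs() ** 2).sum()
        # Learned regularizer: distance from x to the generator output under Sigma(z)
        mu, diag, fac = decoder(z)
        reg = manifold_distance(x, mu, diag, fac).sum()
        loss = data_fit + lam * reg
        loss.backward()
        opt.step()
    return x.detach().view(img_shape)
```

Because the regularizer involves only the trained generator and the forward operator, the same VAE can in principle be reused when the sampling pattern or noise level changes, which is the flexibility the abstract emphasizes.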
Related papers
- RIDE: Self-Supervised Learning of Rotation-Equivariant Keypoint
Detection and Invariant Description for Endoscopy [83.4885991036141]
RIDE is a learning-based method for rotation-equivariant detection and invariant description.
It is trained in a self-supervised manner on a large curation of endoscopic images.
It sets a new state-of-the-art performance on matching and relative pose estimation tasks.
arXiv Detail & Related papers (2023-09-18T08:16:30Z) - Stable Deep MRI Reconstruction using Generative Priors [13.400444194036101]
We propose a novel deep neural network based regularizer which is trained in a generative setting on reference magnitude images only.
The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods.
arXiv Detail & Related papers (2022-10-25T08:34:29Z) - PatchNR: Learning from Small Data by Patch Normalizing Flow
Regularization [57.37911115888587]
We introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows.
Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images.
arXiv Detail & Related papers (2022-05-24T12:14:26Z) - Validation and Generalizability of Self-Supervised Image Reconstruction
Methods for Undersampled MRI [4.832984894979636]
Two self-supervised algorithms based on self-supervised denoising and neural network image priors were investigated.
Their generalizability was tested with prospectively under-sampled data from experimental conditions different to the training.
arXiv Detail & Related papers (2022-01-29T09:06:04Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial
Transformers [0.0]
We introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER).
A zero-shot reconstruction is performed on undersampled test data, where inference is performed by optimizing network parameters.
Experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against several state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-05-15T02:01:21Z) - Bayesian Uncertainty Estimation of Learned Variational MRI
Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z) - Joint reconstruction and bias field correction for undersampled MR
imaging [7.409376558513677]
Undersampling the k-space in MRI allows saving precious acquisition time, yet results in an ill-posed inversion problem.
Deep learning schemes are susceptible to differences between the training data and the image to be reconstructed at test time.
In this work, we address the sensitivity of the reconstruction problem to the bias field and propose to model it explicitly in the reconstruction.
arXiv Detail & Related papers (2020-07-26T12:58:34Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Fully Unsupervised Diversity Denoising with Convolutional Variational
Autoencoders [81.30960319178725]
We propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs).
First we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder.
We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training.
arXiv Detail & Related papers (2020-06-10T21:28:13Z)