Unsupervised Knowledge-Transfer for Learned Image Reconstruction
- URL: http://arxiv.org/abs/2107.02572v1
- Date: Tue, 6 Jul 2021 12:19:16 GMT
- Title: Unsupervised Knowledge-Transfer for Learned Image Reconstruction
- Authors: Riccardo Barbano, Zeljko Kereta, Andreas Hauptmann, Simon R. Arridge,
Bangti Jin
- Abstract summary: We develop a novel unsupervised knowledge-transfer paradigm for learned iterative reconstruction within a Bayesian framework.
We show that the proposed framework significantly improves reconstruction quality not only visually, but also quantitatively in terms of PSNR and SSIM.
- Score: 4.183935970343543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based image reconstruction approaches have demonstrated
impressive empirical performance in many imaging modalities. These approaches
generally require a large amount of high-quality training data, which is often
not available. To circumvent this issue, we develop a novel unsupervised
knowledge-transfer paradigm for learned iterative reconstruction within a
Bayesian framework. The proposed approach learns an iterative reconstruction
network in two phases. The first phase trains a reconstruction network with a
set of ordered pairs comprising ground truth images and measurement data.
The second phase fine-tunes the pretrained network to the measurement data
without supervision. Furthermore, the framework delivers uncertainty
information over the reconstructed image. We present extensive experimental
results on low-dose and sparse-view computed tomography, showing that the
proposed framework significantly improves reconstruction quality not only
visually, but also quantitatively in terms of PSNR and SSIM, and is competitive
with several state-of-the-art supervised and unsupervised reconstruction
techniques.
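The two-phase paradigm can be sketched with a deliberately minimal scalar toy (an assumed linear forward operator A and a one-parameter "network"; this is not the authors' architecture, and it omits the Bayesian treatment that yields the uncertainty estimates):

```python
# Phase 1: supervised pretraining on (ground truth, measurement) pairs.
# Phase 2: unsupervised fine-tuning on measurements alone, using the
# data-fidelity loss ||A f(y) - y||^2 as the self-supervised objective.
A = 2.0                                   # toy forward operator: y = A * x

def recon(w, y):                          # one-parameter stand-in "network"
    return w * y

pairs = [(x, A * x) for x in (1.0, 2.0, 3.0)]
w, lr = 0.0, 0.001

for _ in range(500):                      # phase 1: supervised
    grad = sum(2 * (recon(w, y) - x) * y for x, y in pairs)
    w -= lr * grad

measurements = [A * x for x in (4.0, 5.0)]
for _ in range(500):                      # phase 2: unsupervised
    grad = sum(2 * (A * recon(w, y) - y) * A * y for y in measurements)
    w -= lr * grad

print(round(w, 3))                        # → 0.5, i.e. the inverse of A
```

Here both phases drive w toward the same inverse; the point of the second phase in the paper is that it adapts a pretrained network to new measurement data when no ground truth is available.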
Related papers
- Analysis of Deep Image Prior and Exploiting Self-Guidance for Image
Reconstruction [13.277067849874756]
We study how DIP recovers information from undersampled imaging measurements.
We introduce a self-driven reconstruction process that concurrently optimizes both the network weights and the input.
Our method incorporates a novel denoiser regularization term which enables robust and stable joint estimation of both the network input and reconstructed image.
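A minimal sketch of such joint estimation, on a toy 1-D signal with a mean filter standing in for the denoiser and a single scale parameter standing in for the network (all hypothetical choices, not the paper's components):

```python
import statistics

y = [4.0, 4.2, 3.9, 4.1]                 # toy measurements of a smooth signal

def denoise(z):                          # stand-in denoiser: pull toward mean
    m = statistics.fmean(z)
    return [m] * len(z)

w, z = 1.0, [0.0] * len(y)               # network weight and network input
lr_w, lr_z, lam = 0.002, 0.05, 0.5
for _ in range(3000):
    d = denoise(z)
    # gradients of  sum_i (w*z_i - y_i)^2 + lam * sum_i (z_i - d_i)^2,
    # treating the denoiser output d as fixed during differentiation
    gw = sum(2 * (w * zi - yi) * zi for zi, yi in zip(z, y))
    gz = [2 * (w * zi - yi) * w + 2 * lam * (zi - di)
          for zi, yi, di in zip(z, y, d)]
    w -= lr_w * gw
    z = [zi - lr_z * gzi for zi, gzi in zip(z, gz)]
```

Both the "weights" w and the "input" z are updated each iteration, with the denoiser term keeping the joint estimate stable, which mirrors the summary's claim at toy scale.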
arXiv Detail & Related papers (2024-02-06T15:52:23Z)
- Enhancing Low-dose CT Image Reconstruction by Integrating Supervised and Unsupervised Learning [13.17680480211064]
We propose a hybrid supervised-unsupervised learning framework for X-ray computed tomography (CT) image reconstruction.
Each proposed trained block consists of a deterministic MBIR solver and a neural network.
We demonstrate the efficacy of this learned hybrid model for low-dose CT image reconstruction with limited training data.
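One such block can be caricatured as a deterministic data-fidelity update followed by a learned refinement (scalar toy with assumed operator and a single trained scaling; the paper's MBIR solver and network are far richer):

```python
A, y = 3.0, 6.0                      # toy forward model; true x = 2.0

def mbir_step(x, step=0.05):
    # deterministic MBIR-style update: gradient step on ||A*x - y||^2
    return x - step * 2 * A * (A * x - y)

def network_step(x, theta=0.98):
    # stand-in for the learned network: a single trained scaling
    return theta * x

x = 0.0
for _ in range(200):                 # stack of "trained blocks"
    x = network_step(mbir_step(x))
```

The alternation converges near the true solution; in the hybrid framework the network halves are trained (with limited data) while the MBIR halves enforce consistency with the physics.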
arXiv Detail & Related papers (2023-11-19T20:23:59Z)
- Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations [69.25683256447044]
We introduce an attack approach designed to effectively degrade the reconstruction quality of Learned Image Compression (LIC).
We generate adversarial examples by introducing a Frobenius norm-based loss function to maximize the discrepancy between original images and reconstructed adversarial examples.
Experiments conducted on the Kodak dataset using various LIC models demonstrate the effectiveness of the attack.
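The attack pattern can be sketched in one dimension with a differentiable stand-in for the codec (a lossy scaling, purely illustrative; real LIC models are neural networks and the loss is the Frobenius norm over image tensors):

```python
def codec(x):
    # differentiable stand-in for an LIC model: lossy scaling
    return 0.9 * x

x0, eps, lr = 1.0, 0.1, 0.05         # image, perturbation budget, step size
delta = 0.0
for _ in range(100):
    # gradient ASCENT on the squared discrepancy (codec(x0+delta) - x0)^2
    grad = 2 * (codec(x0 + delta) - x0) * 0.9
    delta = max(-eps, min(eps, delta + lr * grad))
```

The perturbation saturates the budget in the direction that worsens the reconstruction, which is the core of this class of attack: small input changes, large output distortion.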
arXiv Detail & Related papers (2023-06-01T20:21:05Z)
- Uncertainty-Aware Null Space Networks for Data-Consistent Image Reconstruction [0.0]
State-of-the-art reconstruction methods have been developed based on recent advances in deep learning.
For such approaches to be used in safety-critical domains such as medical imaging, the network reconstruction should not only provide the user with a reconstructed image, but also with some level of confidence in the reconstruction.
This work is the first approach to solving inverse problems that additionally models data-dependent uncertainty by estimating an input-dependent scale map.
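An input-dependent scale map is typically trained with the heteroscedastic Gaussian negative log-likelihood; the sketch below (hypothetical toy values, not the paper's network) shows why minimizing it drives the scale up exactly where the reconstruction error is large:

```python
import math

def gaussian_nll(pred, scale, target):
    # per-pixel NLL of N(target; pred, scale^2), up to an additive constant
    return sum(math.log(s) + (p - t) ** 2 / (2 * s * s)
               for p, s, t in zip(pred, scale, target)) / len(pred)

pred    = [1.0, 2.0, 3.0]            # reconstruction
target  = [1.1, 2.0, 2.5]            # ground truth; last pixel is hardest
tight   = [0.1, 0.1, 0.1]            # uniform confidence
adapted = [0.1, 0.1, 0.5]            # larger scale where the error is larger
```

With these values, gaussian_nll(pred, adapted, target) is lower than gaussian_nll(pred, tight, target): the loss rewards honest per-pixel uncertainty, so the learned scale map ends up flagging unreliable regions.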
arXiv Detail & Related papers (2023-04-14T06:58:44Z)
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite-width regime.
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset.
These reconstruction attacks can be used for dataset distillation; that is, we can retrain on reconstructed images and obtain high predictive accuracy.
arXiv Detail & Related papers (2023-02-02T21:41:59Z)
- Residual Back Projection With Untrained Neural Networks [1.2707050104493216]
We present a framework for iterative reconstruction (IR) in computed tomography (CT).
Our framework incorporates structural information as a deep image prior (DIP).
We propose using an untrained U-net in conjunction with a novel residual back projection to minimize an objective function and achieve high-accuracy reconstruction.
arXiv Detail & Related papers (2022-10-26T01:58:09Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
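Deep unfolding in general turns a fixed number of optimization iterations into network stages with learned parameters; a scalar caricature (assumed observation model and hand-picked "learned" values, not MGDUN itself):

```python
A, y = 2.0, 4.0                       # toy observation model; true x = 2.0
steps  = [0.10, 0.08, 0.06]           # per-stage step sizes (learned in training)
priors = [0.99, 0.99, 0.99]           # per-stage prior modules, here a scaling

x = 0.0
for t, s in zip(steps, priors):
    x -= t * 2 * A * (A * x - y)      # data-consistency gradient step
    x = s * x                         # learned regularization step
```

Each loop iteration corresponds to one interpretable network stage: a data-consistency update derived from the observation matrix, followed by a learned prior module.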
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Shared Prior Learning of Energy-Based Models for Image Reconstruction [69.72364451042922]
We propose a novel learning-based framework for image reconstruction particularly designed for training without ground truth data.
In the absence of ground truth data, we change the loss functional to a patch-based Wasserstein functional.
In shared prior learning, both aforementioned optimal control problems are optimized simultaneously with shared learned parameters of the regularizer.
arXiv Detail & Related papers (2020-11-12T17:56:05Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.