Accelerated MRI with Un-trained Neural Networks
- URL: http://arxiv.org/abs/2007.02471v3
- Date: Tue, 27 Apr 2021 18:23:17 GMT
- Title: Accelerated MRI with Un-trained Neural Networks
- Authors: Mohammad Zalbagi Darestani and Reinhard Heckel
- Abstract summary: We address the reconstruction problem arising in accelerated MRI with un-trained neural networks.
We propose a highly optimized un-trained recovery approach based on a variation of the Deep Decoder.
We find that our un-trained algorithm achieves similar performance to a baseline trained neural network, but a state-of-the-art trained network outperforms the un-trained one.
- Score: 29.346778609548995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) are highly effective for image
reconstruction problems. Typically, CNNs are trained on large amounts of
training images. Recently, however, un-trained CNNs such as the Deep Image
Prior and Deep Decoder have achieved excellent performance for image
reconstruction problems such as denoising and inpainting, \emph{without using
any training data}. Motivated by this development, we address the
reconstruction problem arising in accelerated MRI with un-trained neural
networks. We propose a highly optimized un-trained recovery approach based on a
variation of the Deep Decoder and show that it significantly outperforms other
un-trained methods, in particular sparsity-based classical compressed sensing
methods and naive applications of un-trained neural networks. We also compare
performance (both in terms of reconstruction accuracy and computational cost)
in an ideal setup for trained methods, specifically on the fastMRI dataset,
where the training and test data come from the same distribution. We find that
our un-trained algorithm achieves similar performance to a baseline trained
neural network, but a state-of-the-art trained network outperforms the
un-trained one. Finally, we perform a comparison on a non-ideal setup where the
train and test distributions are slightly different, and find that our
un-trained method achieves similar performance to a state-of-the-art
accelerated MRI reconstruction method.
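The core idea, fitting an un-trained, under-parameterized generator directly to the undersampled measurements of a single image, can be sketched on a toy 1D problem. The tiny linear upsampling "decoder", the random measurement matrix, and all dimensions below are illustrative stand-ins, not the paper's Deep Decoder variant or the fastMRI pipeline:

```python
import random

random.seed(0)

N, K, M = 16, 4, 8  # signal length, latent code length, number of measurements

# Fixed linear "decoder": piecewise-linear upsampling from K codes to N samples.
# This tiny, smoothness-biased decoder stands in for an un-trained Deep Decoder.
U = [[0.0] * K for _ in range(N)]
for n in range(N):
    t = n * (K - 1) / (N - 1)
    j = min(int(t), K - 2)
    U[n][j] = 1 - (t - j)
    U[n][j + 1] = t - j

def decode(code):
    return [sum(U[n][k] * code[k] for k in range(K)) for n in range(N)]

# Toy ground truth lying in the decoder's range, and an undersampled
# random measurement operator y = A x with M < N (compressed sensing).
x_true = decode([0.0, 1.0, 0.5, -0.5])
A = [[random.gauss(0, 1) / M ** 0.5 for _ in range(N)] for _ in range(M)]
y = [sum(A[i][n] * x_true[n] for n in range(N)) for i in range(M)]

# Composite forward map B = A @ U; a step size below 1 / ||B||^2 guarantees
# convergence, and the squared Frobenius norm upper-bounds the spectral norm.
B = [[sum(A[i][n] * U[n][k] for n in range(N)) for k in range(K)] for i in range(M)]
frob2 = sum(b * b for row in B for b in row)
step = 0.5 / frob2

# Fit the latent code to the measurements by gradient descent: no training data.
code = [0.0] * K
for _ in range(20000):
    r = [sum(B[i][k] * code[k] for k in range(K)) - y[i] for i in range(M)]
    grad = [2 * sum(B[i][k] * r[i] for i in range(M)) for k in range(K)]
    code = [c - step * g for c, g in zip(code, grad)]

x_hat = decode(code)
err = max(abs(a - b) for a, b in zip(x_hat, x_true))
print(f"max reconstruction error: {err:.2e}")
```

Because the decoder has only K = 4 parameters for N = 16 samples, it biases the fit toward smooth signals, which is what lets M = 8 measurements suffice; actual Deep Decoder variants replace the fixed interpolation with learnable channel-mixing and upsampling layers.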
Related papers
- Image edge enhancement for effective image classification [7.470763273994321]
We propose an edge enhancement-based method to improve both the accuracy and training speed of neural networks.
Our approach involves extracting high frequency features, such as edges, from images within the available dataset and fusing them with the original images.
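A minimal 1D illustration of this kind of edge fusion; the discrete Laplacian filter and the fusion weight `alpha` are my assumptions (an unsharp-masking sketch), not the paper's exact pipeline:

```python
def high_pass(signal):
    """Extract high-frequency content with a discrete Laplacian (a simple edge detector)."""
    inner = [signal[i - 1] - 2 * signal[i] + signal[i + 1]
             for i in range(1, len(signal) - 1)]
    return [0.0] + inner + [0.0]  # zero-pad the borders

def edge_enhance(signal, alpha=0.5):
    """Fuse the extracted edges back into the original signal (unsharp-masking style)."""
    return [s - alpha * e for s, e in zip(signal, high_pass(signal))]

sig = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # a hard edge
enhanced = edge_enhance(sig)
print(enhanced)  # → [0.0, 0.0, -0.5, 1.5, 1.0, 1.0]
```

The overshoot around the step (-0.5 and 1.5) is the amplified edge; flat regions pass through unchanged.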
arXiv Detail & Related papers (2024-01-13T10:01:34Z)
- Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks [0.6853165736531939]
Convolutional neural networks (CNNs) are widely used in various applications due to their effectiveness in extracting features from data.
We propose a layer-to-layer training method and compare its performance with the conventional training method.
Our experiments show that the layer-to-layer training method outperforms the conventional training method for both models.
arXiv Detail & Related papers (2023-03-27T14:29:18Z)
- Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
arXiv Detail & Related papers (2022-06-15T18:35:16Z)
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be applied to existing deblurring neural networks, and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- Convolutional Analysis Operator Learning by End-To-End Training of Iterative Neural Networks [3.6280929178575994]
We show how convolutional sparsifying filters can be efficiently learned by end-to-end training of iterative neural networks.
We evaluate our approach on a non-Cartesian 2D cardiac cine MRI example and show that the obtained filters are better suited to the corresponding reconstruction algorithm than those obtained by decoupled pre-training.
arXiv Detail & Related papers (2022-03-04T07:32:16Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- An End-To-End-Trainable Iterative Network Architecture for Accelerated Radial Multi-Coil 2D Cine MR Image Reconstruction [4.233498905999929]
We propose a CNN-architecture for image reconstruction of accelerated 2D radial cine MRI with multiple receiver coils.
We investigate the proposed training-strategy and compare our method to other well-known reconstruction techniques with learned and non-learned regularization methods.
arXiv Detail & Related papers (2021-02-01T11:42:04Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Multi-fidelity Neural Architecture Search with Knowledge Distillation [69.09782590880367]
We propose a Bayesian multi-fidelity method for neural architecture search: MF-KD.
Knowledge distillation adds a term to the loss function that forces the network to mimic a teacher network.
We show that training for a few epochs with such a modified loss function leads to a better selection of neural architectures than training for a few epochs with a logistic loss.
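The distillation-augmented loss described above can be sketched for a binary toy case. The squared-error mimic term, the temperature `T`, and the weighting `alpha` below are illustrative choices, not MF-KD's exact formulation:

```python
import math

def log_loss(p, y):
    # standard logistic loss for a binary label y in {0, 1}
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def kd_loss(student_logit, teacher_logit, y, alpha=0.5, T=2.0):
    # softened probabilities at temperature T (hypothetical choice)
    ps = 1 / (1 + math.exp(-student_logit / T))
    pt = 1 / (1 + math.exp(-teacher_logit / T))
    hard = log_loss(1 / (1 + math.exp(-student_logit)), y)  # fit the true label
    soft = (ps - pt) ** 2                                   # mimic the teacher
    return (1 - alpha) * hard + alpha * soft

print(f"loss when student matches teacher: {kd_loss(2.0, 2.0, 1):.4f}")
print(f"loss when student contradicts teacher: {kd_loss(2.0, -2.0, 1):.4f}")
```

When the student already matches the teacher, the mimic term vanishes and only the (down-weighted) label loss remains; disagreeing with the teacher raises the loss even when the hard label is fit equally well.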
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed above and is not responsible for any consequences arising from its use.