Convolutional Analysis Operator Learning by End-To-End Training of
Iterative Neural Networks
- URL: http://arxiv.org/abs/2203.02166v1
- Date: Fri, 4 Mar 2022 07:32:16 GMT
- Title: Convolutional Analysis Operator Learning by End-To-End Training of
Iterative Neural Networks
- Authors: Andreas Kofler, Christian Wald, Tobias Schaeffter, Markus Haltmeier,
Christoph Kolbitsch
- Abstract summary: We show how convolutional sparsifying filters can be efficiently learned by end-to-end training of iterative neural networks.
We evaluate our approach on a non-Cartesian 2D cardiac cine MRI example and show that the obtained filters are better suited to the corresponding reconstruction algorithm than those obtained by decoupled pre-training.
- Score: 3.6280929178575994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The concept of sparsity has been extensively applied for regularization in
image reconstruction. Typically, sparsifying transforms are either pre-trained
on ground-truth images or adaptively trained during the reconstruction.
To this end, learning algorithms are designed to minimize a target function
which encodes the desired properties of the transform. However, this procedure
ignores the subsequently employed reconstruction algorithm as well as the
physical model which is responsible for the image formation process. Iterative
neural networks - which contain the physical model - can overcome these issues.
In this work, we demonstrate how convolutional sparsifying filters can be
efficiently learned by end-to-end training of iterative neural networks. We
evaluate our approach on a non-Cartesian 2D cardiac cine MRI example and show
that the obtained filters are better suited to the corresponding
reconstruction algorithm than those obtained by decoupled pre-training.
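As a toy illustration of the idea, the sketch below unrolls a fixed number of proximal-gradient steps for an analysis-operator-regularized reconstruction in 1D. The subsampling mask (a stand-in for the non-Cartesian MRI forward model), the finite-difference filter, and all names (`unrolled_recon`, `soft_threshold`) are illustrative assumptions, not the authors' implementation; in the paper, the filters are the quantities learned end-to-end by backpropagating through the unrolled iterations.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unrolled_recon(y, mask, filters, lam=0.1, step=0.5, n_iter=8):
    """Fixed-length 'iterative network': n_iter unrolled proximal-gradient
    steps for  min_x 0.5*||M x - y||^2 + lam * sum_k ||h_k * x||_1,
    where M is a toy subsampling mask and h_k are the sparsifying
    filters (the parameters that end-to-end training would learn)."""
    n = y.size
    # Circular convolution via FFT, so the adjoint is exactly the
    # conjugate spectrum (no boundary-alignment pitfalls).
    Hs = [np.fft.fft(np.pad(h, (0, n - h.size))) for h in filters]
    x = mask * y  # zero-filled initialization
    for _ in range(n_iter):
        grad_data = mask * (mask * x - y)  # gradient of the data term
        grad_reg = np.zeros_like(x)
        for H in Hs:
            z = np.real(np.fft.ifft(H * np.fft.fft(x)))  # D x
            r = z - soft_threshold(z, lam)               # prox residual
            # D^T r: smoothed l1 gradient pulled back through the filter.
            grad_reg += np.real(np.fft.ifft(np.conj(H) * np.fft.fft(r)))
        x = x - step * (grad_data + lam * grad_reg)
    return x
```

With a piecewise-constant signal and a finite-difference filter, the regularizer fills in a masked-out sample from its neighbors; end-to-end training would adapt the filters jointly with this exact iteration, rather than pre-training them in a decoupled stage.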
Related papers
- Reinforcement Learning for Sampling on Temporal Medical Imaging
Sequences [0.0]
In this work, we apply double deep Q-learning and REINFORCE algorithms to learn the sampling strategy for dynamic image reconstruction.
We consider the data in the format of time series, and the reconstruction method is a pre-trained autoencoder-type neural network.
We present a proof of concept that reinforcement learning algorithms are effective at discovering the optimal sampling pattern.
arXiv Detail & Related papers (2023-08-28T23:55:23Z) - Convolutional Neural Generative Coding: Scaling Predictive Coding to
Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC)
We implement a flexible neurobiologically motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
arXiv Detail & Related papers (2022-11-22T06:42:41Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact
Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - GraDIRN: Learning Iterative Gradient Descent-based Energy Minimization
for Deformable Image Registration [9.684786294246749]
We present a Gradient Descent-based Image Registration Network (GraDIRN) for learning deformable image registration.
GraDIRN is based on multi-resolution gradient descent energy minimization.
We demonstrate that this approach achieves state-of-the-art registration performance while using fewer learnable parameters.
arXiv Detail & Related papers (2021-12-07T14:48:31Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - An End-To-End-Trainable Iterative Network Architecture for Accelerated
Radial Multi-Coil 2D Cine MR Image Reconstruction [4.233498905999929]
We propose a CNN-architecture for image reconstruction of accelerated 2D radial cine MRI with multiple receiver coils.
We investigate the proposed training-strategy and compare our method to other well-known reconstruction techniques with learned and non-learned regularization methods.
arXiv Detail & Related papers (2021-02-01T11:42:04Z) - NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z) - Accelerated MRI with Un-trained Neural Networks [29.346778609548995]
We address the reconstruction problem arising in accelerated MRI with un-trained neural networks.
We propose a highly optimized un-trained recovery approach based on a variation of the Deep Decoder.
We find that our un-trained algorithm achieves similar performance to a baseline trained neural network, but a state-of-the-art trained network outperforms the un-trained one.
arXiv Detail & Related papers (2020-07-06T00:01:25Z) - Compressive sensing with un-trained neural networks: Gradient descent
finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z) - Computational optimization of convolutional neural networks using
separated filters architecture [69.73393478582027]
We consider a convolutional neural network transformation that reduces computational complexity and thus speeds up neural network processing.
The use of convolutional neural networks (CNNs) is the standard approach to image recognition, despite the fact that they can be computationally demanding.
arXiv Detail & Related papers (2020-02-18T17:42:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.