Compressive sensing with un-trained neural networks: Gradient descent
finds the smoothest approximation
- URL: http://arxiv.org/abs/2005.03991v1
- Date: Thu, 7 May 2020 15:57:25 GMT
- Title: Compressive sensing with un-trained neural networks: Gradient descent
finds the smoothest approximation
- Authors: Reinhard Heckel and Mahdi Soltanolkotabi
- Abstract summary: Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
- Score: 60.80172153614544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Un-trained convolutional neural networks have emerged as highly successful
tools for image recovery and restoration. They are capable of solving standard
inverse problems such as denoising and compressive sensing with excellent
results by simply fitting a neural network model to measurements from a single
image or signal without the need for any additional training data. For some
applications, this critically requires additional regularization in the form of
early stopping the optimization. For signal recovery from a few measurements,
however, un-trained convolutional networks have an intriguing self-regularizing
property: Even though the network can perfectly fit any image, the network
recovers a natural image from few measurements when trained with gradient
descent until convergence. In this paper, we provide numerical evidence for
this property and study it theoretically. We show that---without any further
regularization---an un-trained convolutional neural network can approximately
reconstruct signals and images that are sufficiently structured, from a near
minimal number of random measurements.
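
As a concrete illustration of the setup described in the abstract, the sketch below fits an un-trained convolutional decoder to random Gaussian measurements of a single structured image using plain gradient descent run to convergence, with no early stopping. The architecture, image size, number of measurements, and optimizer settings are illustrative assumptions, not the exact network configuration analyzed in the paper.

# Minimal sketch (assumed architecture and sizes, not the paper's exact setup):
# recover a structured image from random linear measurements y = A x by fitting
# an un-trained convolutional decoder with gradient descent, run to convergence
# and without any early stopping or additional regularization.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "sufficiently structured" signal: a smooth 64x64 image.
u = torch.linspace(-1.0, 1.0, 64)
xx, yy = torch.meshgrid(u, u, indexing="ij")
x_true = torch.exp(-4.0 * (xx ** 2 + yy ** 2)).reshape(-1)

n = x_true.numel()                    # signal dimension (4096)
m = n // 4                            # number of random measurements
A = torch.randn(m, n) / m ** 0.5      # Gaussian measurement matrix
y = A @ x_true                        # compressive measurements of one image

class Decoder(nn.Module):
    """Small decoder-style CNN with a fixed random input; only weights are fit."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, z):
        return self.net(z)

G = Decoder()
z = torch.randn(1, 64, 16, 16)        # fixed network input, never optimized

opt = torch.optim.SGD(G.parameters(), lr=0.01)
for step in range(10_000):            # run (approximately) to convergence
    opt.zero_grad()
    x_hat = G(z).reshape(-1)          # candidate signal produced by the network
    loss = ((A @ x_hat - y) ** 2).mean()   # fit the measurements, not the image
    loss.backward()
    opt.step()

print("measurement loss:", float(loss),
      "reconstruction MSE:", float(((x_hat - x_true) ** 2).mean()))

The key point the sketch mirrors is that the loss only involves the measurements y, yet the converged network output tends toward a natural (smooth) image rather than an arbitrary solution of the underdetermined system.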
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Untrained neural network embedded Fourier phase retrieval from few measurements [8.914156789222266]
This paper proposes an untrained neural network embedded algorithm to solve FPR with few measurements.
We use a generative network to represent the image to be recovered, which confines the image to the space defined by the network structure.
To reduce the computational cost mainly caused by the parameter updates of the untrained NN, we develop an accelerated algorithm that adaptively trades off between explicit and implicit regularization.
arXiv Detail & Related papers (2023-07-16T16:23:50Z)
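
To make the network-constrained formulation in the entry above concrete, here is a minimal, illustrative loss for fitting an un-trained generator so that the Fourier magnitudes of its output match the measured data; it is not that paper's algorithm, and its accelerated explicit/implicit regularization trade-off is omitted. The `net` argument can be any decoder-style network such as the one sketched earlier, and the shapes are assumptions.

import torch

def fourier_magnitude_loss(net, z, mag_meas):
    # Image proposed by the un-trained network (fixed input z, weights are the unknowns).
    x_hat = net(z).squeeze()
    # Magnitude-only Fourier measurements: the phase is not observed in FPR.
    mag_hat = torch.fft.fft2(x_hat).abs()
    return ((mag_hat - mag_meas) ** 2).mean()
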
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be applied to existing deblurring neural networks and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- Convolutional Analysis Operator Learning by End-To-End Training of Iterative Neural Networks [3.6280929178575994]
We show how convolutional sparsifying filters can be efficiently learned by end-to-end training of iterative neural networks.
We evaluate our approach on a non-Cartesian 2D cardiac cine MRI example and show that the obtained filters are better suited to the corresponding reconstruction algorithm than those obtained by decoupled pre-training.
arXiv Detail & Related papers (2022-03-04T07:32:16Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue of representing general signals.
Current approaches are difficult to scale to a large number of signals or to a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- Self-supervised Neural Networks for Spectral Snapshot Compressive Imaging [15.616674529295366]
We consider using untrained neural networks to solve the reconstruction problem of snapshot compressive imaging (SCI).
In this paper, inspired by the untrained neural networks such as deep image priors (DIP) and deep decoders, we develop a framework by integrating DIP into the plug-and-play regime, leading to a self-supervised network for spectral SCI reconstruction.
arXiv Detail & Related papers (2021-08-28T14:17:38Z)
- Accelerated MRI with Un-trained Neural Networks [29.346778609548995]
We address the reconstruction problem arising in accelerated MRI with un-trained neural networks.
We propose a highly optimized un-trained recovery approach based on a variation of the Deep Decoder.
We find that our un-trained algorithm achieves similar performance to a baseline trained neural network, but a state-of-the-art trained network outperforms the un-trained one.
arXiv Detail & Related papers (2020-07-06T00:01:25Z)
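
For intuition on this last setting, the snippet below shows the kind of data-consistency loss such an un-trained approach minimizes: the network output is pushed to agree with the acquired, undersampled k-space samples through a sampling mask. It is a schematic single-coil version under assumed shapes, not the optimized Deep Decoder variant from that paper.

import torch

def masked_kspace_loss(net, z, kspace_meas, mask):
    # kspace_meas: acquired k-space samples (complex), zero where not sampled.
    # mask: 0/1 float tensor marking which k-space locations were acquired.
    x_hat = net(z).squeeze()              # candidate image from the un-trained network
    k_hat = torch.fft.fft2(x_hat)         # simulated single-coil MRI forward model
    resid = mask * (k_hat - kspace_meas)  # compare only at acquired locations
    return (resid.abs() ** 2).mean()
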