BP-DIP: A Backprojection based Deep Image Prior
- URL: http://arxiv.org/abs/2003.05417v2
- Date: Tue, 30 Jun 2020 17:01:02 GMT
- Title: BP-DIP: A Backprojection based Deep Image Prior
- Authors: Jenny Zukerman, Tom Tirer, Raja Giryes
- Abstract summary: We propose to combine two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch at test time using the degraded image; and (ii) a backprojection (BP) fidelity term, an alternative to the standard least squares loss used in previous DIP works.
We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and faster inference.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are a very powerful tool for many computer vision tasks,
including image restoration, exhibiting state-of-the-art results. However, the
performance of deep learning methods tends to drop once the observation model
used in training does not match the one encountered at test time. In addition, most deep
learning methods require vast amounts of training data, which are not
accessible in many applications. To mitigate these disadvantages, we propose to
combine two image restoration approaches: (i) Deep Image Prior (DIP), which
trains a convolutional neural network (CNN) from scratch at test time using only
the given degraded image; it requires no training data and builds on the
implicit prior imposed by the CNN architecture; and (ii) a backprojection (BP)
fidelity term, which is an alternative to the standard least squares loss that
is usually used in previous DIP works. We demonstrate the performance of the
proposed method, termed BP-DIP, on the deblurring task and show its advantages
over the plain DIP, with both higher PSNR values and faster inference.
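For intuition: DIP parameterizes the restored image as the output f_theta(z) of an untrained CNN fed a fixed random input z, and fits theta to the single degraded image y. With a blur operator H, plain DIP minimizes the least squares loss ||y - H f_theta(z)||^2, while the BP fidelity term replaces it with ||H^+(y - H f_theta(z))||^2, where H^+ is a (regularized) pseudoinverse of H. The sketch below is a minimal PyTorch illustration of this loss for deblurring, not the authors' implementation; the Fourier-domain operators, the eps stabilizer, and all variable names are assumptions.

```python
# Minimal BP-DIP sketch (illustrative, not the authors' code). Assumes the
# blur H is a circular convolution with a real kernel, so H and a regularized
# pseudoinverse H^+ = conj(H) / (|H|^2 + eps) act pointwise in Fourier domain.
import torch

def apply_H(x, k_fft):
    # Blur: multiply by the kernel's 2-D frequency response k_fft.
    return torch.fft.ifft2(torch.fft.fft2(x) * k_fft).real

def apply_H_pinv(x, k_fft, eps=1e-3):
    # Regularized pseudoinverse of the blur (eps is an assumed stabilizer).
    return torch.fft.ifft2(
        torch.fft.fft2(x) * torch.conj(k_fft) / (k_fft.abs() ** 2 + eps)
    ).real

def bp_dip(net, z, y, k_fft, steps=2000, lr=1e-3):
    # Fit an untrained CNN to the single degraded image y.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    y_bp = apply_H_pinv(y, k_fft)             # H^+ y, computed once
    for _ in range(steps):
        opt.zero_grad()
        x_hat = net(z)                        # candidate image from fixed noise z
        # BP fidelity: || H^+ y - H^+ H x_hat ||^2
        # (plain DIP would use  || y - H x_hat ||^2)
        loss = (y_bp - apply_H_pinv(apply_H(x_hat, k_fft), k_fft)).pow(2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()
```

As with other DIP-style methods, early stopping matters in practice: the iteration count acts as the main regularizer, since running to full convergence eventually fits the degradation as well as the image.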
Related papers
- Chasing Better Deep Image Priors between Over- and Under-parameterization [63.8954152220162]
We study a novel "lottery image prior" (LIP) by exploiting DNN inherent sparsity.
LIP subnetworks significantly outperform deep decoders under comparably compact model sizes.
We also extend LIP to compressive sensing image reconstruction, where a pre-trained GAN generator is used as the prior.
arXiv Detail & Related papers (2024-10-31T17:49:44Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Residual Back Projection With Untrained Neural Networks [1.2707050104493216]
We present a framework for iterative reconstruction (IR) in computed tomography (CT).
Our framework incorporates structural information about the underlying image as a deep image prior (DIP).
We propose using an untrained U-net in conjunction with a novel residual back projection to minimize an objective function and achieve high-accuracy reconstruction.
arXiv Detail & Related papers (2022-10-26T01:58:09Z)
- MetaDIP: Accelerating Deep Image Prior with Meta Learning [15.847098400811188]
We use meta-learning to massively accelerate DIP-based reconstructions.
We demonstrate a 10x improvement in runtimes across a range of inverse imaging tasks.
arXiv Detail & Related papers (2022-09-18T02:41:58Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a projected version of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Deep Artifact-Free Residual Network for Single Image Super-Resolution [0.2399911126932526]
We propose the Deep Artifact-Free Residual (DAFR) network, which combines the merits of residual learning with the use of the ground-truth image as the target.
Our framework uses a deep model to extract the high-frequency information which is necessary for high-quality image reconstruction.
Our experimental results show that the proposed method achieves better quantitative and qualitative image quality compared to the existing methods.
arXiv Detail & Related papers (2020-09-25T20:53:55Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the resulting hybrid plug-and-play image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
- On the interplay between physical and content priors in deep learning for computational imaging [5.486833154281385]
We use the Phase Extraction Neural Network (PhENN) for quantitative phase retrieval in a lensless phase imaging system.
We show that the two questions are related and share a common crux: the choice of the training examples.
We also discover that weaker regularization effect leads to better learning of the underlying propagation model.
arXiv Detail & Related papers (2020-04-14T08:36:46Z)