MetaDIP: Accelerating Deep Image Prior with Meta Learning
- URL: http://arxiv.org/abs/2209.08452v1
- Date: Sun, 18 Sep 2022 02:41:58 GMT
- Title: MetaDIP: Accelerating Deep Image Prior with Meta Learning
- Authors: Kevin Zhang, Mingyang Xie, Maharshi Gor, Yi-Ting Chen, Yvonne Zhou,
Christopher A. Metzler
- Abstract summary: We use meta-learning to massively accelerate DIP-based reconstructions.
We demonstrate a 10x improvement in runtimes across a range of inverse imaging tasks.
- Score: 15.847098400811188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep image prior (DIP) is a recently proposed technique for solving imaging
inverse problems by fitting the reconstructed images to the output of an
untrained convolutional neural network. Unlike pretrained feedforward neural
networks, the same DIP can generalize to arbitrary inverse problems, from
denoising to phase retrieval, while offering competitive performance at each
task. The central disadvantage of DIP is that, while feedforward neural
networks can reconstruct an image in a single pass, DIP must gradually update
its weights over hundreds to thousands of iterations, at a significant
computational cost. In this work we use meta-learning to massively accelerate
DIP-based reconstructions. By learning a proper initialization for the DIP
weights, we demonstrate a 10x improvement in runtimes across a range of inverse
imaging tasks. Moreover, we demonstrate that a network trained to quickly
reconstruct faces also generalizes to reconstructing natural image patches.
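The learned-initialization idea can be sketched in a few lines. The toy below is an illustration only, not the paper's method: a tiny linear model stands in for the DIP network, and a Reptile-style first-order update stands in for the meta-learning algorithm (the names `dip_fit` and `meta_init` are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

def dip_fit(w0, y, A, steps, lr=0.02):
    """Fit a toy linear 'network' f(w) = A @ w to measurement y by
    gradient descent from initialization w0 (a stand-in for DIP's
    per-image weight updates)."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * (A.T @ (A @ w - y))  # gradient of 0.5*||A w - y||^2
    return w

def meta_init(tasks, dim, meta_steps=100, inner_steps=10, meta_lr=0.5):
    """Reptile-style meta-learning: repeatedly adapt to a sampled task,
    then nudge the shared initialization toward the adapted weights."""
    w0 = np.zeros(dim)
    for _ in range(meta_steps):
        A, y = tasks[rng.integers(len(tasks))]
        w0 += meta_lr * (dip_fit(w0, y, A, inner_steps) - w0)
    return w0

# A family of related inverse problems: one forward operator, targets
# clustered around a shared w_star (loosely analogous to "faces").
dim = 8
A = rng.standard_normal((16, dim))
w_star = rng.standard_normal(dim)
tasks = [(A, A @ (w_star + 0.1 * rng.standard_normal(dim)))
         for _ in range(20)]
w0_meta = meta_init(tasks, dim)

# On a fresh task from the same family, the meta-learned initialization
# reaches a far lower loss within the same small budget of steps.
y_new = A @ (w_star + 0.1 * rng.standard_normal(dim))
loss = lambda w: 0.5 * np.sum((A @ w - y_new) ** 2)
w_scratch = dip_fit(np.zeros(dim), y_new, A, steps=5)
w_meta = dip_fit(w0_meta, y_new, A, steps=5)
print(loss(w_meta) < loss(w_scratch))
```

The gap between the two 5-step fits is the toy analogue of the reported 10x runtime improvement: starting near the task family's solutions leaves far less optimization to do per image.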
Related papers
- Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction [13.277067849874756]
We study how DIP recovers information from undersampled imaging measurements.
We introduce a self-driven reconstruction process that concurrently optimizes both the network weights and the input.
Our method incorporates a novel denoiser regularization term which enables robust and stable joint estimation of both the network input and reconstructed image.
arXiv Detail & Related papers (2024-02-06T15:52:23Z)
- Deep Generalized Unfolding Networks for Image Restoration [16.943609020362395]
We propose a Deep Generalized Unfolding Network (DGUNet) for image restoration.
We integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm.
Our method is superior in terms of state-of-the-art performance, interpretability, and generalizability.
arXiv Detail & Related papers (2022-04-28T08:39:39Z)
- Convolutional Analysis Operator Learning by End-To-End Training of Iterative Neural Networks [3.6280929178575994]
We show how convolutional sparsifying filters can be efficiently learned by end-to-end training of iterative neural networks.
We evaluate our approach on a non-Cartesian 2D cardiac cine MRI example and show that the obtained filters are better suited to the corresponding reconstruction algorithm than those obtained by decoupled pre-training.
arXiv Detail & Related papers (2022-03-04T07:32:16Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion [90.65667807498086]
This paper presents a zero-shot direct model inversion framework that recovers the input to the trained model given only the internal representation.
We empirically show that modern classification models on ImageNet can, surprisingly, be inverted, allowing an approximate recovery of the original 224x224px images from a representation after more than 20 layers.
arXiv Detail & Related papers (2021-07-13T18:01:43Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
- BP-DIP: A Backprojection based Deep Image Prior [49.375539602228415]
We combine two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch at test time using the degraded image; and (ii) a backprojection (BP) fidelity term, an alternative to the standard least squares loss used in previous DIP works.
We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and better inference run-time.
arXiv Detail & Related papers (2020-03-11T17:09:12Z)
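The difference between the two fidelity terms is easy to see on a small ill-conditioned problem. The sketch below is illustrative only (a random matrix stands in for a blur operator): for a full-column-rank A, the backprojection loss 0.5*||pinv(A)(Ax - y)||^2 has an identity Hessian, so plain gradient descent on it converges far faster than on the least-squares loss.

```python
import numpy as np

rng = np.random.default_rng(2)

# An ill-conditioned forward operator standing in for a blur:
# singular values spread from 1 down to 0.01.
U, _ = np.linalg.qr(rng.standard_normal((20, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
s = np.geomspace(1.0, 0.01, 8)
A = (U * s) @ V.T
A_pinv = np.linalg.pinv(A)

x_true = rng.standard_normal(8)
y = A @ x_true

def gd(grad, lr=1.0, steps=50):
    """Plain gradient descent from zero on a quadratic loss."""
    x = np.zeros(8)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Least-squares fidelity 0.5*||A x - y||^2: Hessian A^T A has
# condition number (1/0.01)^2, so the small-singular-value modes
# barely move in 50 steps.
x_ls = gd(lambda x: A.T @ (A @ x - y))
# Backprojection fidelity 0.5*||pinv(A) @ (A x - y)||^2: since
# pinv(A) @ A = I for full column rank A, its Hessian is the identity
# and gradient descent with lr=1 converges immediately.
M = A_pinv @ A
x_bp = gd(lambda x: M.T @ (A_pinv @ (A @ x - y)))

print(np.linalg.norm(x_bp - x_true) < 1e-6 < np.linalg.norm(x_ls - x_true))
```

In BP-DIP the same reweighting acts on the DIP loss rather than on a raw quadratic, which is consistent with the reported gains in both PSNR and inference run-time.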
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.