NAS-DIP: Learning Deep Image Prior with Neural Architecture Search
- URL: http://arxiv.org/abs/2008.11713v1
- Date: Wed, 26 Aug 2020 17:59:36 GMT
- Title: NAS-DIP: Learning Deep Image Prior with Neural Architecture Search
- Authors: Yun-Chun Chen, Chen Gao, Esther Robb, Jia-Bin Huang
- Abstract summary: Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
- Score: 65.79109790446257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that the structure of deep convolutional neural
networks can be used as a structured image prior for solving various inverse
image restoration tasks. Instead of using hand-designed architectures, we
propose to search for neural architectures that capture stronger image priors.
Building upon a generic U-Net architecture, our core contribution lies in
designing new search spaces for (1) an upsampling cell and (2) a pattern of
cross-scale residual connections. We search for an improved network by
leveraging an existing neural architecture search algorithm (using
reinforcement learning with a recurrent neural network controller). We validate
the effectiveness of our method via a wide variety of applications, including
image restoration, dehazing, image-to-image translation, and matrix
factorization. Extensive experimental results show that our algorithm performs
favorably against state-of-the-art learning-free approaches and reaches
competitive performance with existing learning-based methods in some cases.
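The deep-image-prior idea underlying this search can be illustrated in miniature: a randomly initialized generator with a fixed random input is fitted to a single corrupted image, and early stopping keeps it from also fitting the noise. Below is a minimal, hypothetical NumPy sketch; the tiny two-layer dense network is only a stand-in for the U-Net-style generators the paper searches over, and all sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the deep-image-prior fitting loop. A FIXED random
# input z is mapped to the corrupted signal by a small generator; the
# only "prior" is the network structure plus early stopping, and no
# external training data is used. The dense two-layer generator here is
# a hypothetical stand-in for the paper's convolutional U-Net.
rng = np.random.default_rng(0)

n = 64
clean = np.sin(np.linspace(0, 4 * np.pi, n))   # smooth ground-truth signal
noisy = clean + 0.3 * rng.standard_normal(n)   # observed corrupted signal

z = rng.standard_normal(n)                     # fixed input, never updated
W1 = 0.1 * rng.standard_normal((n, n))         # toy generator weights
W2 = 0.1 * rng.standard_normal((n, n))

def forward():
    h = np.tanh(W1 @ z)
    return W2 @ h, h

out, _ = forward()
fit_init = float(np.mean((out - noisy) ** 2))  # initial data-fit loss

lr = 0.005
for _ in range(200):                           # early stopping: few steps
    out, h = forward()
    err = out - noisy                          # grad of 0.5*||out - noisy||^2
    W2 -= lr * np.outer(err, h)                # manual backprop, layer 2
    W1 -= lr * np.outer((W2.T @ err) * (1 - h ** 2), z)  # layer 1

out, _ = forward()
fit_final = float(np.mean((out - noisy) ** 2))
print(fit_final < fit_init)
```

In the actual method it is the convolutional structure of the generator that supplies the prior; NAS-DIP's contribution is to search over upsampling cells and cross-scale residual connection patterns to make that structural prior stronger.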
Related papers
- Research on Image Super-Resolution Reconstruction Mechanism based on Convolutional Neural Network [8.739451985459638]
Super-resolution algorithms transform one or more sets of low-resolution images captured from the same scene into high-resolution images.
The extraction of image features and nonlinear mapping methods in the reconstruction process remain challenging for existing algorithms.
The objective is to recover high-quality, high-resolution images from low-resolution images.
arXiv Detail & Related papers (2024-07-18T06:50:39Z)
- DQNAS: Neural Architecture Search using Reinforcement Learning [6.33280703577189]
Convolutional Neural Networks have been used in a variety of image related applications.
In this paper, we propose an automated Neural Architecture Search framework, guided by the principles of Reinforcement Learning.
arXiv Detail & Related papers (2023-01-17T04:01:47Z)
- Single Cell Training on Architecture Search for Image Denoising [16.72206392993489]
We reframe the optimal search problem by focusing on the component block level.
In addition, we integrate innovative dimension-matching modules that handle spatial and channel-wise mismatches.
Our proposed Denoising Prior Neural Architecture Search (DPNAS) completes an optimal architecture search for an image restoration task in just one day on a single GPU.
arXiv Detail & Related papers (2022-12-13T04:47:24Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional neural architectures.
arXiv Detail & Related papers (2021-10-26T03:28:36Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored to each domain task, together with weight finetuning, improves both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Deep Unrolled Network for Video Super-Resolution [0.45880283710344055]
Video super-resolution (VSR) aims to reconstruct a sequence of high-resolution (HR) images from their corresponding low-resolution (LR) versions.
Traditionally, solving a VSR problem has been based on iterative algorithms that exploit prior knowledge on image formation and assumptions on the motion.
Deep learning (DL) algorithms can efficiently learn spatial patterns from large collections of images.
We propose a new VSR neural network based on unrolled optimization techniques and discuss its performance.
arXiv Detail & Related papers (2021-02-23T14:35:09Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
- Neural Architecture Search for Compressed Sensing Magnetic Resonance Image Reconstruction [36.636219616998225]
We propose a novel and efficient network for the MR image reconstruction problem, found via NAS instead of manual design.
Experimental results show that our searched network produces better reconstruction results than previous state-of-the-art methods.
Our proposed method reaches a better trade-off between computational cost and reconstruction performance for the MR reconstruction problem, with good generalizability.
arXiv Detail & Related papers (2020-02-22T04:40:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences.