ISNAS-DIP: Image-Specific Neural Architecture Search for Deep Image
Prior
- URL: http://arxiv.org/abs/2111.15362v1
- Date: Sat, 27 Nov 2021 13:53:25 GMT
- Title: ISNAS-DIP: Image-Specific Neural Architecture Search for Deep Image
Prior
- Authors: Metin Ersin Arican, Ozgur Kara, Gustav Bredell and Ender Konukoglu
- Abstract summary: We show that optimal neural architectures in the DIP framework are image-dependent.
We propose an image-specific NAS strategy for the DIP framework that requires substantially less training than typical NAS approaches.
Our experiments show that image-specific metrics can reduce the search space to a small cohort of models, of which the best model outperforms current NAS approaches for image restoration.
- Score: 6.098254376499899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works show that convolutional neural network (CNN) architectures have
a spectral bias towards lower frequencies, which has been leveraged for various
image restoration tasks in the Deep Image Prior (DIP) framework. The benefit of
the inductive bias the network imposes in the DIP framework depends on the
architecture. Therefore, researchers have studied how to automate the search to
determine the best-performing model. However, common neural architecture search
(NAS) techniques are resource and time-intensive. Moreover, best-performing
models are determined for a whole dataset of images instead of for each image
independently, which would be prohibitively expensive. In this work, we first
show that optimal neural architectures in the DIP framework are
image-dependent. Leveraging this insight, we then propose an image-specific NAS
strategy for the DIP framework that requires substantially less training than
typical NAS approaches, effectively enabling image-specific NAS. For a given
image, noise is fed to a large set of untrained CNNs, and their outputs' power
spectral densities (PSD) are compared to that of the corrupted image using
various metrics. Based on this, a small cohort of image-specific architectures
is chosen and trained to reconstruct the corrupted image. Among this cohort,
the model whose reconstruction is closest to the average of the reconstructed
images is chosen as the final model. We justify the proposed strategy's
effectiveness by (1) demonstrating its performance on a NAS Dataset for DIP
that includes 500+ models from a particular search space, and (2) conducting
extensive experiments on image denoising, inpainting, and super-resolution
tasks. Our experiments show that image-specific metrics can reduce the search
space to a small cohort of models, of which the best model outperforms current
NAS approaches for image restoration.
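The abstract outlines a two-stage selection procedure: first, untrained architectures are ranked by how well the power spectral density (PSD) of their random outputs matches the PSD of the corrupted image; second, after training the selected cohort, the model whose reconstruction is closest to the cohort's average reconstruction is kept. The sketch below illustrates the general idea in NumPy/PyTorch; the concrete metric used here (an L1 distance between radially averaged log-PSDs), the noise-input shape, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import torch


def radial_psd(img: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Radially averaged power spectral density of a 2D image (illustrative)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max() + 1e-8, n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    psd = np.bincount(idx, weights=power.ravel(), minlength=n_bins)[:n_bins]
    counts = np.bincount(idx, minlength=n_bins)[:n_bins]
    return psd / np.maximum(counts, 1)


def psd_distance(model: torch.nn.Module, corrupted: np.ndarray) -> float:
    """Assumed metric: L1 distance between log-PSDs of an *untrained* model's
    output (fed with fixed noise, as in DIP) and of the corrupted image."""
    with torch.no_grad():
        # Assumes the network accepts a 32-channel noise tensor and returns a
        # single-channel image of the same spatial size (illustrative shapes).
        noise = torch.randn(1, 32, *corrupted.shape)
        out = model(noise).squeeze().cpu().numpy()
    eps = 1e-12
    return float(np.abs(np.log(radial_psd(out) + eps)
                        - np.log(radial_psd(corrupted) + eps)).mean())


def select_cohort(models, corrupted, k=5):
    """Stage 1: keep the k untrained architectures whose output PSD best
    matches the corrupted image's PSD (no training required)."""
    scores = [psd_distance(m, corrupted) for m in models]
    return [models[i] for i in np.argsort(scores)[:k]]


def pick_final(reconstructions):
    """Stage 2: after training the cohort, return the index of the
    reconstruction closest (in MSE) to the cohort's average reconstruction."""
    stack = np.stack(reconstructions)  # shape (k, H, W)
    mean_rec = stack.mean(axis=0)
    errs = ((stack - mean_rec) ** 2).reshape(len(stack), -1).mean(axis=1)
    return int(np.argmin(errs))
```

Because stage 1 only requires forward passes through untrained networks, the expensive DIP optimization is run only for the small cohort, which is what makes per-image architecture search affordable.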
Related papers
- Parameter-Inverted Image Pyramid Networks [49.35689698870247]
We propose a novel network architecture known as Parameter-Inverted Image Pyramid Networks (PIIP).
Our core idea is to use models with different parameter sizes to process different resolution levels of the image pyramid.
PIIP achieves superior performance in tasks such as object detection, segmentation, and image classification.
arXiv Detail & Related papers (2024-06-06T17:59:10Z)
- HNAS-reg: hierarchical neural architecture search for deformable medical image registration [0.8249180979158817]
This paper presents a hierarchical NAS framework (HNAS-Reg) to identify the optimal network architecture for deformable medical image registration.
Experiments on three datasets, consisting of 636 T1-weighted magnetic resonance images (MRIs), demonstrate that the proposed method can build a deep learning model with improved image registration accuracy and reduced model size.
arXiv Detail & Related papers (2023-08-23T21:47:28Z)
- Single Cell Training on Architecture Search for Image Denoising [16.72206392993489]
We re-frame the optimal architecture search problem by focusing on the component block level.
In addition, we integrate innovative dimension matching modules to deal with spatial and channel-wise mismatches.
Our proposed Denoising Prior Neural Architecture Search (DPNAS) completes an architecture search for an image restoration task in just one day on a single GPU.
arXiv Detail & Related papers (2022-12-13T04:47:24Z)
- OSLO: On-the-Sphere Learning for Omnidirectional images and its application to 360-degree image compression [59.58879331876508]
We study the learning of representation models for omnidirectional images and propose to use the properties of HEALPix uniform sampling of the sphere to redefine the mathematical tools used in deep learning models for omnidirectional images.
Our proposed on-the-sphere solution leads to a better compression gain that can save 13.7% of the bit rate compared to similar learned models applied to equirectangular images.
arXiv Detail & Related papers (2021-07-19T22:14:30Z)
- Searching Efficient Model-guided Deep Network for Image Denoising [61.65776576769698]
We present a novel approach by connecting model-guided design with NAS (MoD-NAS).
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that our MoD-NAS has achieved even better PSNR performance than current state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T14:03:01Z)
- Learning Versatile Neural Architectures by Propagating Network Codes [74.2450894473073]
We propose Network Coding Propagation (NCP), a novel "neural predictor" that is able to predict an architecture's performance on multiple datasets and tasks.
NCP learns from network codes but not original data, enabling it to update the architecture efficiently across datasets.
arXiv Detail & Related papers (2021-03-24T15:20:38Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Parkinson's Disease Detection with Ensemble Architectures based on ILSVRC Models [1.8884278918443564]
We explore various neural network architectures using Magnetic Resonance (MR) T1 images of the brain to identify Parkinson's Disease (PD).
All of our proposed architectures outperform existing approaches to detecting PD from MR images, achieving up to 95% detection accuracy.
Our findings suggest a promising direction when no or insufficient training data is available.
arXiv Detail & Related papers (2020-07-23T05:40:47Z)
- BP-DIP: A Backprojection based Deep Image Prior [49.375539602228415]
We combine two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch at test time using the degraded image; and (ii) a backprojection (BP) fidelity term, an alternative to the standard least-squares loss commonly used in previous DIP works.
We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and better inference run-time.
arXiv Detail & Related papers (2020-03-11T17:09:12Z)
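The BP-DIP summary contrasts the standard least-squares data term with a backprojection (BP) fidelity term. Below is a minimal sketch of the two losses, assuming the common formulation in which the residual is mapped back through (an approximation of) the pseudo-inverse of the degradation operator before being penalized; the operator handles `H` and `H_pinv` and their exact form are assumptions, not the paper's implementation.

```python
import torch


def least_squares_loss(x, y, H):
    """Standard DIP data term: ||H x - y||^2, with H the known degradation
    operator, x the network output and y the degraded observation."""
    return ((H(x) - y) ** 2).mean()


def backprojection_loss(x, y, H, H_pinv):
    """Assumed BP fidelity term: ||H^+ (H x - y)||^2, i.e. the residual is
    first projected back to image space via the pseudo-inverse H^+ (hedged
    reconstruction of the term described in the summary above)."""
    return (H_pinv(H(x) - y) ** 2).mean()
```

In this reading, the BP term reweights the residual in image space rather than in measurement space, which is why it can behave differently from the plain least-squares loss during DIP optimization.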