Neural Architecture Search for Image Super-Resolution Using Densely
Constructed Search Space: DeCoNAS
- URL: http://arxiv.org/abs/2104.09048v1
- Date: Mon, 19 Apr 2021 04:51:16 GMT
- Title: Neural Architecture Search for Image Super-Resolution Using Densely
Constructed Search Space: DeCoNAS
- Authors: Joon Young Ahn and Nam Ik Cho
- Abstract summary: We use neural architecture search (NAS) methods to find a lightweight densely connected network named DeCoNASNet.
We define a complexity-based penalty for solving image super-resolution, which can be considered a multi-objective problem.
Experiments show that our DeCoNASNet outperforms state-of-the-art lightweight super-resolution networks designed by handcrafted methods and existing NAS-based designs.
- Score: 18.191710317555952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent progress of deep convolutional neural networks has enabled great
success in single image super-resolution (SISR) and many other vision tasks.
Their performance has also been improved by deepening the networks and
developing more sophisticated network structures. However, finding an optimal
structure for the given problem is a difficult task, even for human experts.
For this reason, neural architecture search (NAS) methods have been introduced,
which automate the design of network structures. In this paper, we extend NAS
to the super-resolution domain and find a lightweight densely
connected network named DeCoNASNet. We use a hierarchical search strategy to
find the best connection with local and global features. In this process, we
define a complexity-based penalty for solving image super-resolution, which can
be considered a multi-objective problem. Experiments show that our DeCoNASNet
outperforms state-of-the-art lightweight super-resolution networks designed
by handcrafted methods as well as existing NAS-based designs.
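The abstract treats lightweight SISR search as a multi-objective problem handled through a complexity-based penalty on the search reward. The paper does not spell out the penalty's exact form, so the following is a minimal sketch assuming a linear trade-off between validation PSNR and relative parameter count; the function name, the reference budget, and `lam` are illustrative, not taken from the paper.

```python
def penalized_reward(psnr_db: float, num_params: int,
                     ref_params: int = 1_000_000, lam: float = 0.1) -> float:
    """Score an architecture candidate for the NAS controller.

    Combines a quality term (validation PSNR in dB) with a complexity
    penalty so the search favors lightweight networks. The linear form
    and the coefficient `lam` are assumptions; the paper only states
    that a complexity-based penalty is used.
    """
    complexity = num_params / ref_params  # model size relative to a budget
    return psnr_db - lam * complexity


# Example: a 380K-parameter candidate reaching 32.1 dB scores slightly
# below its raw PSNR, while a heavier model is penalized more.
score = penalized_reward(psnr_db=32.1, num_params=380_000)
```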
Related papers
- G-EvoNAS: Evolutionary Neural Architecture Search Based on Network
Growth [6.712149832731174]
This paper proposes a computationally efficient neural architecture evolutionary search framework based on network growth (G-EvoNAS); a toy sketch of the grow-then-evolve pattern follows this entry.
The G-EvoNAS is tested on three commonly used image classification datasets, CIFAR10, CIFAR100, and ImageNet.
Experimental results demonstrate that G-EvoNAS can find a neural network architecture comparable to state-of-the-art designs in 0.2 GPU days.
arXiv Detail & Related papers (2024-03-05T05:44:38Z)
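G-EvoNAS's abstract describes only the high-level idea of evolving a population while the networks grow, so the snippet below is a toy sketch of that pattern under stated assumptions: the block vocabulary, the mutation scheme, all hyperparameters, and the placeholder `evaluate` fitness are invented for illustration.

```python
import random

BLOCK_CHOICES = ["conv3x3", "conv5x5", "sep_conv", "skip"]  # illustrative vocabulary


def evaluate(arch: list[str]) -> float:
    """Placeholder fitness; in practice, train and score the network."""
    return random.random()


def grow_evolve(depth: int = 8, pop_size: int = 10, generations: int = 5) -> list[str]:
    population: list[list[str]] = [[] for _ in range(pop_size)]
    for _ in range(depth):
        # Grow every network in the population by one block.
        population = [arch + [random.choice(BLOCK_CHOICES)] for arch in population]
        for _ in range(generations):
            # Mutate one block per child, then keep the fittest half.
            children = []
            for arch in population:
                child = list(arch)
                child[random.randrange(len(child))] = random.choice(BLOCK_CHOICES)
                children.append(child)
            population = sorted(population + children, key=evaluate, reverse=True)[:pop_size]
    return population[0]
```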
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques; a sketch of the block-wise supervision follows this entry.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
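The block-wise supervision behind the DNA family can be pictured as distilling each supernet block against the matching block of a teacher network. A rough PyTorch sketch, assuming MSE feature matching and toy stand-in blocks (the real blocks come from a supernet and a pretrained teacher):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for one teacher block and one candidate student block
# operating on the same feature shape.
teacher_block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
student_block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())

x = torch.randn(4, 64, 32, 32)       # features entering this block
with torch.no_grad():
    target = teacher_block(x)         # teacher output supervises the block
loss = F.mse_loss(student_block(x), target)
loss.backward()                       # updates only this block's weights
```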
- Single Cell Training on Architecture Search for Image Denoising [16.72206392993489]
We re-frame the optimal search problem by focusing on the component block level.
In addition, we integrate an innovative dimension-matching module to handle spatial and channel-wise mismatches; a sketch follows this entry.
Our proposed Denoising Prior Neural Architecture Search (DPNAS) completes an optimal architecture search for an image restoration task in just one day with a single GPU.
arXiv Detail & Related papers (2022-12-13T04:47:24Z)
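The dimension-matching idea in DPNAS, reconciling spatial and channel mismatches between searched blocks, can be illustrated with a small PyTorch module. Bilinear resizing plus a 1x1 convolution is an assumption here; the abstract does not describe the module's internals.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DimensionMatcher(nn.Module):
    """Align a feature map to a target shape (hypothetical design):
    bilinear interpolation for spatial mismatch, 1x1 conv for channels."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor, target_hw: tuple) -> torch.Tensor:
        if x.shape[-2:] != target_hw:
            x = F.interpolate(x, size=target_hw, mode="bilinear",
                              align_corners=False)
        return self.proj(x)


# Example: match a 32-channel 16x16 map to 64 channels at 32x32.
matcher = DimensionMatcher(32, 64)
y = matcher(torch.randn(1, 32, 16, 16), target_hw=(32, 32))
```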
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have achieved remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Searching Efficient Model-guided Deep Network for Image Denoising [61.65776576769698]
We present a novel approach by connecting model-guided design with NAS (MoD-NAS).
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that our MoD-NAS has achieved even better PSNR performance than current state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T14:03:01Z)
- Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI Stereo 2012, KITTI Stereo 2015, and Middlebury benchmarks, as well as first on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z)
- WDN: A Wide and Deep Network to Divide-and-Conquer Image Super-resolution [0.0]
Divide and conquer is an established algorithm design paradigm that has proven effective at solving a variety of problems efficiently.
We propose an approach to divide the problem of image super-resolution into multiple sub-problems and then solve/conquer them with the help of a neural network.
arXiv Detail & Related papers (2020-10-07T06:15:11Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge on the best-performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- Neural Architecture Search as Sparse Supernet [78.09905626281046]
This paper broadens the problem of Neural Architecture Search (NAS) from Single-Path and Multi-Path Search to automated Mixed-Path Search.
We model the NAS problem as a sparse supernet using a new continuous architecture representation with a mixture of sparsity constraints.
The sparse supernet enables us to automatically derive sparsely mixed paths over a compact set of nodes; a sketch of one such mixed edge follows this entry.
arXiv Detail & Related papers (2020-07-31T14:51:52Z)
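A mixed-path edge under a sparsity constraint can be sketched as below. This is a minimal illustration, not the paper's exact representation: it assumes raw non-negative mixture weights with an L1 penalty, added to the task loss as `loss + lam * edge.sparsity_penalty()`, so that most weights are driven to zero and only a sparse subset of paths survives.

```python
import torch
import torch.nn as nn


class MixedEdge(nn.Module):
    """One supernet edge: a weighted mixture of candidate operations."""

    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        # Continuous architecture weights, one per candidate operation.
        self.alpha = nn.Parameter(torch.ones(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.relu(self.alpha)  # non-negative, unnormalized mixture
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

    def sparsity_penalty(self) -> torch.Tensor:
        # Assumed L1 surrogate: pushes most mixture weights to zero so a
        # sparse mix of paths remains after search.
        return torch.relu(self.alpha).sum()
```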
- DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation [44.46852065566759]
We propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information.
Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs; a sketch of this fusion step follows this entry.
We demonstrate that the architecture obtained from our DCNAS algorithm achieves state-of-the-art performances on public semantic image segmentation benchmarks.
arXiv Detail & Related papers (2020-03-26T13:21:33Z)
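A compact sketch of DCNAS-style dense connectivity: each cell fuses the outputs of all earlier cells with learnable connection weights. The sigmoid gating and the cell body below are assumptions for illustration; the abstract specifies only that cells are connected by learnable weights.

```python
import torch
import torch.nn as nn


class DenselyConnectedCell(nn.Module):
    """Fuse all previous cells' outputs with learnable connection weights."""

    def __init__(self, num_inputs: int, channels: int):
        super().__init__()
        # One learnable connection strength per incoming cell.
        self.gates = nn.Parameter(torch.zeros(num_inputs))
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())

    def forward(self, prev_outputs: list) -> torch.Tensor:
        w = torch.sigmoid(self.gates)  # connection strengths in (0, 1)
        fused = sum(wi * f for wi, f in zip(w, prev_outputs))
        return self.body(fused)


# Example: the third cell fuses the stem and two earlier cell outputs.
feats = [torch.randn(1, 32, 16, 16) for _ in range(3)]
out = DenselyConnectedCell(num_inputs=3, channels=32)(feats)
```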