DCNAS: Densely Connected Neural Architecture Search for Semantic Image
Segmentation
- URL: http://arxiv.org/abs/2003.11883v2
- Date: Sat, 27 Mar 2021 15:03:47 GMT
- Title: DCNAS: Densely Connected Neural Architecture Search for Semantic Image
Segmentation
- Authors: Xiong Zhang, Hongmin Xu, Hong Mo, Jianchao Tan, Cheng Yang, Lei Wang,
Wenqi Ren
- Abstract summary: We propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information.
Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs.
We demonstrate that the architecture obtained from our DCNAS algorithm achieves state-of-the-art performance on public semantic image segmentation benchmarks.
- Score: 44.46852065566759
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Architecture Search (NAS) has shown great potential in automatically
designing scalable network architectures for dense image predictions. However,
existing NAS algorithms usually compromise on a restricted search space and
search on a proxy task to meet achievable computational demands. To allow as
wide a range of network architectures as possible and to avoid the gap between the
target and proxy datasets, we propose a Densely Connected NAS (DCNAS) framework, which
directly searches the optimal network structures for the multi-scale
representations of visual information over a large-scale target dataset.
Specifically, by connecting cells with each other using learnable weights, we
introduce a densely connected search space that covers an abundance of mainstream
network designs. Moreover, by combining both path-level and channel-level
sampling strategies, we design a fusion module to reduce the memory consumption
of the ample search space. We demonstrate that the architecture obtained from our
DCNAS algorithm achieves state-of-the-art performance on public semantic image
segmentation benchmarks, including 84.3% mIoU on Cityscapes and 86.9% mIoU on PASCAL
VOC 2012. We also retain leading performance when evaluating the architecture on
the more challenging ADE20K and Pascal Context datasets.
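
To make the two key ideas above concrete, here is a minimal sketch (not the authors' released code) of a densely connected super-network in PyTorch: each cell fuses the outputs of all earlier cells through learnable, softmax-normalized edge weights, while path-level sampling (keeping only the k strongest incoming connections) and channel-level sampling (fusing only a fraction of the channels) bound memory during the search. All class and parameter names (`DenselyConnectedSuperNet`, `paths_kept`, `channel_frac`) are illustrative assumptions, not from the paper.

```python
# Illustrative sketch of a densely connected search space with
# path-level and channel-level sampling. Names are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Cell(nn.Module):
    """Stand-in for a searched cell; a single conv keeps the sketch small."""
    def __init__(self, channels):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.op(x)


class DenselyConnectedSuperNet(nn.Module):
    """Every cell may consume every earlier feature map (dense connectivity).

    `paths_kept` is the path-level sampling budget; `channel_frac` is the
    channel-level sampling ratio. Both are hypothetical knobs for this sketch.
    """
    def __init__(self, num_cells=4, channels=16, paths_kept=2, channel_frac=0.5):
        super().__init__()
        self.cells = nn.ModuleList([Cell(channels) for _ in range(num_cells)])
        # One learnable logit per (earlier output -> cell i) connection.
        self.edge_logits = nn.ParameterList(
            [nn.Parameter(torch.zeros(i + 1)) for i in range(num_cells)]
        )
        self.paths_kept = paths_kept
        self.split = int(channels * channel_frac)  # channels that get fused

    def forward(self, x):
        feats = [x]  # feats[0] plays the role of the stem output
        for i, cell in enumerate(self.cells):
            w = F.softmax(self.edge_logits[i], dim=0)
            # Path-level sampling: materialize only the k strongest
            # incoming connections instead of all of them.
            k = min(self.paths_kept, len(feats))
            top_w, top_idx = torch.topk(w, k)
            top_w = top_w / top_w.sum()  # renormalize the kept weights
            # Channel-level sampling: fuse only the first `split` channels;
            # the rest pass through unchanged from the strongest input.
            fused = sum(wi * feats[int(j)][:, : self.split]
                        for wi, j in zip(top_w, top_idx))
            rest = feats[int(top_idx[0])][:, self.split:]
            feats.append(cell(torch.cat([fused, rest], dim=1)))
        return feats[-1]


if __name__ == "__main__":
    net = DenselyConnectedSuperNet()
    out = net(torch.randn(2, 16, 32, 32))
    print(out.shape)  # torch.Size([2, 16, 32, 32])
```

After the search, the edges with the largest learned weights would be kept to derive the final architecture. The real DCNAS additionally connects cells across multiple spatial scales and samples paths stochastically rather than by top-k; this sketch only fixes the ideas of learnable dense connectivity and sampled, memory-bounded fusion.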
Related papers
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find a superior network for the target data.
In this paper, we investigate a new neural architecture search measure for discovering architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- A General-Purpose Transferable Predictor for Neural Architecture Search [22.883809911265445]
We propose a general-purpose neural predictor for Neural Architecture Search (NAS) that can transfer across search spaces.
Experimental results on NAS-Bench-101, 201 and 301 demonstrate the efficacy of our scheme.
arXiv Detail & Related papers (2023-02-21T17:28:05Z)
- NAS-based Recursive Stage Partial Network (RSPNet) for Light-Weight Semantic Segmentation [16.019616787091202]
Current NAS-based semantic segmentation methods focus on accuracy improvements rather than light-weight design.
We propose a two-stage framework to design our NAS-based RSPNet model for light-weight semantic segmentation.
The proposed architecture is efficient, simple, and effective; both the macro- and micro-structure searches can be completed in five days of computation.
arXiv Detail & Related papers (2022-10-03T03:25:29Z)
- Towards Less Constrained Macro-Neural Architecture Search [2.685668802278155]
Neural Architecture Search (NAS) networks achieve state-of-the-art performance in a variety of tasks.
Most NAS methods rely heavily on human-defined assumptions that constrain the search.
We present experiments showing that our method, LCMNAS, generates state-of-the-art architectures from scratch with minimal GPU computation.
arXiv Detail & Related papers (2022-03-10T17:53:03Z)
- HyperSegNAS: Bridging One-Shot Neural Architecture Search with 3D Medical Image Segmentation using HyperNet [51.60655410423093]
We introduce HyperSegNAS to enable one-shot Neural Architecture Search (NAS) for medical image segmentation.
We show that HyperSegNAS yields better-performing and more intuitive architectures than previous state-of-the-art (SOTA) segmentation networks.
Our method is evaluated on public datasets from the Medical Segmentation Decathlon (MSD) challenge and achieves SOTA performance.
arXiv Detail & Related papers (2021-12-20T16:21:09Z)
- Searching Efficient Model-guided Deep Network for Image Denoising [61.65776576769698]
We present a novel approach by connecting model-guided design with NAS (MoD-NAS).
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that MoD-NAS achieves better PSNR performance than current state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T14:03:01Z)
- Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI stereo 2012, 2015, and Middlebury benchmarks, as well as on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z)
- AutoPose: Searching Multi-Scale Branch Aggregation for Pose Estimation [96.29533512606078]
We present AutoPose, a novel neural architecture search (NAS) framework.
It is capable of automatically discovering multiple parallel branches of cross-scale connections towards accurate and high-resolution 2D human pose estimation.
arXiv Detail & Related papers (2020-08-16T22:27:43Z)
- Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.