NAS-based Recursive Stage Partial Network (RSPNet) for Light-Weight
Semantic Segmentation
- URL: http://arxiv.org/abs/2210.00698v1
- Date: Mon, 3 Oct 2022 03:25:29 GMT
- Title: NAS-based Recursive Stage Partial Network (RSPNet) for Light-Weight
Semantic Segmentation
- Authors: Yi-Chun Wang, Jun-Wei Hsieh, Ming-Ching Chang
- Abstract summary: Current NAS-based semantic segmentation methods focus on accuracy improvements rather than light-weight design.
We propose a two-stage framework to design our NAS-based RSPNet model for light-weight semantic segmentation.
The proposed architecture is so efficient, simple, and effective that both the macro- and micro-structure searches can be completed in five days of computation.
- Score: 16.019616787091202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current NAS-based semantic segmentation methods focus on accuracy
improvements rather than light-weight design. In this paper, we propose a
two-stage framework to design our NAS-based RSPNet model for light-weight
semantic segmentation. The first architecture search determines the inner cell
structure, and the second architecture search considers exponentially growing
paths to finalize the outer structure of the network. It was shown in the
literature that the fusion of high- and low-resolution feature maps produces
stronger representations. To find the expected macro structure without manual
design, we adopt a new path-attention mechanism to efficiently search for
suitable paths to fuse useful information for better segmentation. Our search
for repeatable micro-structures from cells leads to a superior network
architecture in semantic segmentation. In addition, we propose an RSP
(Recursive Stage Partial) architecture to search for a light-weight design for
NAS-based semantic segmentation. The proposed architecture is so efficient,
simple, and effective that both the macro- and micro-structure searches can be
completed in five days of computation on two V100 GPUs. The light-weight NAS
architecture, with only 1/4 the parameter size of SoTA architectures, can achieve
SoTA performance on semantic segmentation on the Cityscapes dataset without
using any backbones.
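The paper does not spell out the path-attention mechanism here, but the idea of softly weighting candidate fusion paths can be sketched as a softmax over learnable path logits. The function and variable names below are illustrative assumptions, and the candidate feature maps are assumed to have already been resampled to a common shape:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over path logits.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_paths(feature_maps, path_logits):
    """Weight candidate paths by attention scores and sum them.

    feature_maps: list of arrays with identical shape (C, H, W), e.g.
                  high- and low-resolution branches resampled to a
                  common resolution.
    path_logits:  learnable scores, one per candidate path.
    """
    weights = softmax(np.asarray(path_logits, dtype=float))
    fused = sum(w * f for w, f in zip(weights, feature_maps))
    return fused, weights

# Toy example: three candidate paths feeding one fusion node.
maps = [np.full((4, 8, 8), v) for v in (1.0, 2.0, 3.0)]
fused, w = fuse_paths(maps, [0.5, 0.1, 2.0])
```

During search, the logits would be trained jointly with the network so that uninformative paths receive near-zero weight and can be dropped from the final macro structure.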
Related papers
- Pruning-as-Search: Efficient Neural Architecture Search via Channel
Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method that searches out the desired sub-network automatically and efficiently.
Our proposed architecture outperforms prior art by around 1.0% top-1 accuracy on the ImageNet-1000 classification task.
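PaS's exact formulation is not given in this summary; a common pattern that channel-pruning-as-search methods build on is scoring output channels with learnable gates and keeping only the highest-scoring ones. The sketch below uses hypothetical names and a fixed keep ratio for illustration:

```python
import numpy as np

def prune_channels(weight, gate_scores, keep_ratio=0.5):
    """Select the top-scoring output channels of a conv weight.

    weight:      array of shape (out_ch, in_ch, k, k)
    gate_scores: one learnable score per output channel
    Returns the pruned weight and the kept channel indices.
    """
    out_ch = weight.shape[0]
    keep = max(1, int(out_ch * keep_ratio))
    kept = np.argsort(gate_scores)[::-1][:keep]
    kept = np.sort(kept)  # preserve original channel order
    return weight[kept], kept

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4, 3, 3))
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4])
pruned, kept = prune_channels(w, scores, keep_ratio=0.5)
```

In an end-to-end setting the gate scores would be trained with a sparsity penalty, so pruning and architecture search happen in a single optimization rather than as separate stages.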
arXiv Detail & Related papers (2022-06-02T17:58:54Z) - BaLeNAS: Differentiable Architecture Search via the Bayesian Learning
Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates the neural architecture search as a distribution learning problem through relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
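Relaxing architecture weights into Gaussian distributions, as BaLeNAS's summary describes, can be illustrated by sampling per-operation logits from learned Gaussians and mixing candidate operations with the resulting softmax weights. The operations and parameter values below are placeholders, not the paper's actual search space:

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sample_mixed_op(x, mu, sigma):
    """Sample architecture logits from Gaussians, then mix candidate ops.

    mu, sigma: per-operation Gaussian parameters (the learned
               distribution over architecture weights).
    Candidate ops here are placeholders: identity, doubling, and zero.
    """
    ops = [lambda t: t, lambda t: 2 * t, lambda t: np.zeros_like(t)]
    logits = rng.normal(mu, sigma)  # one sample per forward pass
    alpha = softmax(logits)         # relaxed operation weights
    return sum(a * op(x) for a, op in zip(alpha, ops)), alpha

x = np.ones(3)
y, alpha = sample_mixed_op(x, mu=np.array([0.0, 1.0, -2.0]),
                           sigma=np.array([0.1, 0.1, 0.1]))
```

Sampling rather than using fixed logits injects exploration into the search, which is the stability benefit the summary attributes to the Bayesian treatment.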
arXiv Detail & Related papers (2021-11-25T18:13:42Z) - Rethinking Architecture Selection in Differentiable NAS [74.61723678821049]
Differentiable Neural Architecture Search is one of the most popular NAS methods for its search efficiency and simplicity.
We propose an alternative perturbation-based architecture selection that directly measures each operation's influence on the supernet.
We find that several failure modes of DARTS can be greatly alleviated with the proposed selection method.
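Perturbation-based selection, as summarized above, scores each candidate operation by how much the supernet degrades when that operation is removed. A minimal sketch under toy assumptions (a single edge, a stand-in evaluation function) follows:

```python
import numpy as np

def select_op_by_perturbation(op_outputs, evaluate):
    """Pick the operation whose removal hurts the supernet most.

    op_outputs: list of candidate-operation outputs on one edge.
    evaluate:   callable scoring the summed edge output; stands in
                for supernet validation accuracy (higher is better).
    """
    base = evaluate(sum(op_outputs))
    drops = []
    for i in range(len(op_outputs)):
        masked = sum(o for j, o in enumerate(op_outputs) if j != i)
        drops.append(base - evaluate(masked))
    return int(np.argmax(drops)), drops

# Toy edge: operation 1 contributes most toward a target signal.
target = np.array([1.0, 2.0, 3.0])
outs = [0.1 * target, 0.8 * target, 0.05 * target]
score = lambda y: -np.linalg.norm(y - target)  # higher is better
best, drops = select_op_by_perturbation(outs, score)
```

This direct influence measurement avoids relying on the magnitude of the softmax architecture weights, which the paper argues can be a poor proxy for an operation's true contribution.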
arXiv Detail & Related papers (2021-08-10T00:53:39Z) - Memory-Efficient Hierarchical Neural Architecture Search for Image
Restoration [68.6505473346005]
We propose HiNAS, a memory-efficient hierarchical NAS framework for image denoising and image super-resolution tasks.
With a single GTX1080Ti GPU, it takes only about 1 hour to search for the denoising network on BSD 500 and 3.5 hours for the super-resolution structure on DIV2K.
arXiv Detail & Related papers (2020-12-24T12:06:17Z) - Auto-Panoptic: Cooperative Multi-Component Architecture Search for
Panoptic Segmentation [144.50154657257605]
We propose an efficient framework to simultaneously search for all main components including backbone, segmentation branches, and feature fusion module.
Our searched architecture, namely Auto-Panoptic, achieves the new state-of-the-art on the challenging COCO and ADE20K benchmarks.
arXiv Detail & Related papers (2020-10-30T08:34:35Z) - Smooth Variational Graph Embeddings for Efficient Neural Architecture
Search [41.62970837629573]
We propose a two-sided variational graph autoencoder that can smoothly encode and accurately reconstruct neural architectures from various search spaces.
We evaluate the proposed approach on neural architectures defined by the ENAS approach, the NAS-Bench-101 and the NAS-Bench-201 search spaces.
arXiv Detail & Related papers (2020-10-09T17:05:41Z) - Multi-Objective Neural Architecture Search Based on Diverse Structures
and Adaptive Recommendation [4.595675084986132]
The search space of neural architecture search (NAS) for convolutional neural network (CNN) is huge.
We propose MoARR algorithm, which utilizes the existing research results and historical information to quickly find architectures that are both lightweight and accurate.
Experimental results show that MoARR can find a powerful and lightweight model (1.9% error rate, 2.3M parameters) on CIFAR-10 in 6 GPU hours.
arXiv Detail & Related papers (2020-07-06T13:42:33Z) - Neural Architecture Optimization with Graph VAE [21.126140965779534]
We propose an efficient NAS approach to optimize network architectures in a continuous space.
The framework jointly learns four components: the encoder, the performance predictor, the complexity predictor and the decoder.
arXiv Detail & Related papers (2020-06-18T07:05:48Z) - Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z) - DCNAS: Densely Connected Neural Architecture Search for Semantic Image
Segmentation [44.46852065566759]
We propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information.
Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs.
We demonstrate that the architecture obtained from our DCNAS algorithm achieves state-of-the-art performances on public semantic image segmentation benchmarks.
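The densely connected search space DCNAS describes, where cells are linked by learnable weights, can be sketched as each cell consuming a softly-weighted sum of all earlier outputs. The cell function and logit layout below are illustrative assumptions only:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def densely_connected_forward(x, num_cells, conn_logits, cell):
    """Each cell consumes a learnable-weighted sum of ALL earlier outputs.

    conn_logits[i] holds one logit per predecessor of cell i (including
    the stem input), so the search space covers every dense connection
    pattern between cells.
    """
    outputs = [x]  # stem output
    for i in range(num_cells):
        weights = softmax(conn_logits[i][: len(outputs)])
        cell_in = sum(w * o for w, o in zip(weights, outputs))
        outputs.append(cell(cell_in))
    return outputs[-1]

cell = lambda t: np.tanh(t)                   # placeholder cell computation
logits = [np.zeros(i + 1) for i in range(3)]  # uniform connections
out = densely_connected_forward(np.ones(4), 3, logits, cell)
```

Training the connection logits end-to-end lets the search recover mainstream designs (chains, skip connections, multi-scale fusion) as special cases of one dense space.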
arXiv Detail & Related papers (2020-03-26T13:21:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.