Scaling Wide Residual Networks for Panoptic Segmentation
- URL: http://arxiv.org/abs/2011.11675v2
- Date: Mon, 8 Feb 2021 04:07:27 GMT
- Title: Scaling Wide Residual Networks for Panoptic Segmentation
- Authors: Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao
- Abstract summary: Wide Residual Networks (Wide-ResNets) are a shallow but wide model variant of the Residual Networks (ResNets).
We revisit its architecture design for the recent challenging panoptic segmentation task, which aims to unify semantic segmentation and instance segmentation.
We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime.
- Score: 29.303735643858026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Wide Residual Networks (Wide-ResNets), a shallow but wide model
variant of the Residual Networks (ResNets) obtained by stacking a small number
of residual blocks with large channel sizes, have demonstrated outstanding
performance on multiple dense prediction tasks. However, since it was proposed,
the Wide-ResNet architecture has barely evolved. In this work, we revisit its
architecture design for the recent challenging panoptic segmentation task,
which aims to unify semantic segmentation and instance segmentation. A baseline
model is obtained by incorporating the simple and effective
Squeeze-and-Excitation and Switchable Atrous Convolution into the Wide-ResNets.
Its network capacity is further scaled up or down by adjusting the width (i.e.,
channel size) and depth (i.e., number of layers), resulting in a family of
SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such
a simple scaling scheme, coupled with grid search, identifies several
SWideRNets that significantly advance state-of-the-art performance on panoptic
segmentation datasets in both the fast model regime and strong model regime.
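To make the scaling scheme concrete, here is a minimal, hedged PyTorch sketch of the ingredients the abstract names: a Wide-ResNet residual block augmented with Squeeze-and-Excitation, and a stage builder whose channel and layer counts are adjusted by width and depth multipliers. The block layout, the SE reduction ratio, and the `make_stage` helper are illustrative assumptions, not the authors' implementation, and Switchable Atrous Convolution is omitted for brevity.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by a learned global gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(1, channels // reduction)
        self.fc = nn.Sequential(
            nn.Linear(channels, mid), nn.ReLU(inplace=True),
            nn.Linear(mid, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))     # squeeze: global average pooling
        return x * w[:, :, None, None]      # excite: per-channel rescaling

class WideBlock(nn.Module):
    """Wide-ResNet basic block plus SE (Switchable Atrous Conv not shown)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.se = SEBlock(channels)

    def forward(self, x):
        return torch.relu(x + self.se(self.body(x)))

def make_stage(base_channels: int, base_depth: int,
               width_mult: float, depth_mult: float) -> nn.Sequential:
    """Scale one stage: width_mult adjusts channels, depth_mult adjusts layers.
    Assumes the incoming feature map already has the scaled channel count."""
    channels = int(base_channels * width_mult)
    depth = max(1, round(base_depth * depth_mult))
    return nn.Sequential(*[WideBlock(channels) for _ in range(depth)])
```

Grid-searching over (width_mult, depth_mult) pairs, as the abstract describes, then amounts to instantiating this family at different multipliers and picking the best accuracy/compute trade-off for each regime.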
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
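Reading the summary literally, a weight-space learner of this kind must turn a network's parameters into a sequence it can process. The sketch below is a speculative illustration of that step only; the chunk size, embedding, and encoder choice are all assumptions, not SANE's design:

```python
import torch
import torch.nn as nn

class WeightSequenceEncoder(nn.Module):
    """Hypothetical: embed fixed-size chunks of a flattened weight vector
    and process them as a sequence, in the spirit of the summary above."""
    def __init__(self, chunk_size: int = 256, dim: int = 128):
        super().__init__()
        self.chunk_size = chunk_size
        self.embed = nn.Linear(chunk_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)

    def forward(self, model: nn.Module):
        flat = torch.cat([p.detach().flatten() for p in model.parameters()])
        pad = (-flat.numel()) % self.chunk_size
        flat = torch.cat([flat, flat.new_zeros(pad)])   # pad to whole chunks
        tokens = flat.view(1, -1, self.chunk_size)      # (1, T, chunk_size)
        return self.encoder(self.embed(tokens))         # (1, T, dim)
```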
arXiv Detail & Related papers (2024-06-14T13:12:07Z) - MFPNet: Multi-scale Feature Propagation Network For Lightweight Semantic Segmentation [5.58363644107113]
We propose a novel lightweight segmentation architecture, called the Multi-scale Feature Propagation Network (MFPNet).
We design a robust Encoder-Decoder structure featuring symmetrical residual blocks that consist of flexible bottleneck residual modules (BRMs).
Taking advantage of their capacity to model latent long-range contextual relationships, we leverage Graph Convolutional Networks (GCNs) to facilitate multi-scale feature propagation between the BRM blocks.
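The summary sketches the architecture only at a high level; as a hedged illustration, a flexible bottleneck residual module could look like the following. The 1x1-3x3-1x1 layout and the bottleneck ratio are assumptions, and the GCN-based propagation between BRM blocks is not shown:

```python
import torch.nn as nn
import torch.nn.functional as F

class BottleneckResidualModule(nn.Module):
    """Hypothetical BRM: 1x1 reduce -> 3x3 transform -> 1x1 expand + skip."""
    def __init__(self, channels: int, bottleneck_ratio: int = 4):
        super().__init__()
        mid = max(1, channels // bottleneck_ratio)
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        # Symmetric residual form: output keeps input resolution and channels.
        return F.relu(x + self.body(x))
```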
arXiv Detail & Related papers (2023-09-10T02:02:29Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights with a small amount proportional to the magnitude scale on-the-fly.
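A minimal sketch of one soft-shrinkage step as the summary describes it, assuming a magnitude-based importance criterion; the shrink factor and the per-iteration threshold rule are illustrative choices, not the paper's exact schedule:

```python
import torch

def soft_shrink_step(weight: torch.Tensor, sparsity: float,
                     shrink: float = 0.02) -> torch.Tensor:
    """Shrink the smallest-magnitude fraction `sparsity` of weights in place."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    unimportant = weight.abs() <= threshold
    with torch.no_grad():
        # Soft shrinkage: scale down proportionally to magnitude,
        # instead of hard-zeroing as in conventional pruning.
        weight[unimportant] *= (1.0 - shrink)
    return weight
```

Applied once per training iteration, unimportant weights fade gradually instead of being hard-zeroed, which is the "soft" part of the method's name.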
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Pooling Revisited: Your Receptive Field is Suboptimal [35.11562214480459]
The size and shape of the receptive field determine how the network aggregates local information.
We propose a simple yet effective Dynamically Optimized Pooling operation, referred to as DynOPool.
Our experiments show that the models equipped with the proposed learnable resizing module outperform the baseline networks on multiple datasets in image classification and semantic segmentation.
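As a hedged sketch of what a learnable resizing module can look like (the parameterization is an assumption, and the paper's operation differs, notably in how gradients reach the learned scale; here the resize target is treated as fixed within each forward pass):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableResize(nn.Module):
    """Hypothetical resizing module with a trainable per-stage scale factor."""
    def __init__(self, init_scale: float = 0.5):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(init_scale))

    def forward(self, x):
        h = max(1, int(x.shape[-2] * self.scale.item()))
        w = max(1, int(x.shape[-1] * self.scale.item()))
        # Bilinear resize; in the paper gradients reach the learned scale
        # through a differentiable formulation -- here the target size is
        # detached for simplicity.
        return F.interpolate(x, size=(h, w), mode="bilinear",
                             align_corners=False)
```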
arXiv Detail & Related papers (2022-05-30T17:03:40Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is structured as a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing every input through the same fixed path, DG-Net aggregates features dynamically at each node, which gives the network more representational ability.
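A minimal sketch of instance-aware aggregation in this spirit, assuming a single node that weights its incoming features with gates predicted from the input itself; the gating form is illustrative, not DG-Net's definition:

```python
import torch
import torch.nn as nn

class DynamicAggregate(nn.Module):
    """Aggregate predecessor features with input-dependent edge weights."""
    def __init__(self, channels: int, num_inputs: int):
        super().__init__()
        self.gate = nn.Linear(channels, num_inputs)

    def forward(self, inputs):                 # list of (B, C, H, W) tensors
        stacked = torch.stack(inputs, dim=1)   # (B, K, C, H, W)
        context = stacked.mean(dim=(1, 3, 4))  # (B, C) instance-level summary
        w = torch.softmax(self.gate(context), dim=1)  # (B, K) edge weights
        return (w[:, :, None, None, None] * stacked).sum(dim=1)
```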
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - Asymptotics of Wide Convolutional Neural Networks [18.198962344790377]
We study scaling laws for wide CNNs and networks with skip connections.
We find that the difference in performance between finite and infinite width models vanishes at a definite rate with respect to model width.
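In symbols, such a statement is typically a power law in the width n; the generic exponent alpha below is a deliberate hedge, since the precise rate is the paper's result:

```latex
\mathcal{L}(n) \;-\; \mathcal{L}(\infty) \;\sim\; C\, n^{-\alpha}
\quad \text{as } n \to \infty
```

where L(n) is the test loss of the width-n model and L(infinity) its infinite-width limit.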
arXiv Detail & Related papers (2020-08-19T21:22:19Z) - OverNet: Lightweight Multi-Scale Super-Resolution with Overscaling Network [3.6683231417848283]
We introduce OverNet, a deep but lightweight convolutional network to solve single image super-resolution (SISR) at arbitrary scale factors with a single model.
We show that our network outperforms previous state-of-the-art results in standard benchmarks while using fewer parameters than previous approaches.
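One hedged way to realize arbitrary scale factors with a single model, suggested by the "overscaling" name: reconstruct once at a fixed maximum scale, then resample to the requested size. The module below is an illustrative PyTorch sketch, not OverNet's actual head:

```python
import torch.nn as nn
import torch.nn.functional as F

class OverscaleHead(nn.Module):
    """Hypothetical head: reconstruct at a fixed maximum scale once, then
    resample to any requested scale with a single set of weights."""
    def __init__(self, channels: int, max_scale: int = 4):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(channels, 3 * max_scale ** 2, 3, padding=1),
            nn.PixelShuffle(max_scale))        # overscale to max_scale

    def forward(self, feats, scale: float):
        h, w = feats.shape[-2:]                # low-resolution spatial size
        over = self.up(feats)                  # fixed overscaled reconstruction
        target = (int(h * scale), int(w * scale))
        return F.interpolate(over, size=target, mode="bicubic",
                             align_corners=False)
```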
arXiv Detail & Related papers (2020-08-05T22:10:29Z) - Multigrid-in-Channels Architectures for Wide Convolutional Neural Networks [6.929025509877642]
We present a multigrid approach that combats the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs).
Our examples from supervised image classification show that applying this strategy to residual networks and MobileNetV2 considerably reduces the number of parameters without negatively affecting accuracy.
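The quadratic growth being combated is easy to see numerically: a dense KxK convolution costs C_in * C_out * K^2 parameters, so doubling the channels quadruples the cost. The snippet below illustrates this, using plain grouped connectivity as a simplified stand-in for the multigrid structure:

```python
def conv_params(c_in: int, c_out: int, k: int = 3, groups: int = 1) -> int:
    """Parameter count of a KxK convolution with optional channel groups."""
    return (c_in // groups) * c_out * k * k

for c in (64, 128, 256):
    dense = conv_params(c, c)                  # grows quadratically in c
    grouped = conv_params(c, c, groups=8)      # reduced by the group count
    print(f"channels={c:4d}  dense={dense:9d}  grouped(g=8)={grouped:8d}")
```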
arXiv Detail & Related papers (2020-06-11T20:28:36Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Temporally Distributed Networks for Fast Video Semantic Segmentation [64.5330491940425]
TDNet is a temporally distributed network designed for fast and accurate video semantic segmentation.
We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks.
Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency.
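A hedged sketch of that composition: features produced by shallow sub-networks on consecutive frames are fused to approximate one deep feature for the current frame. Fusing by concatenation plus a 1x1 convolution is a simplification here; the paper's grouping and fusion details differ:

```python
import torch
import torch.nn as nn

class TemporallyDistributedFusion(nn.Module):
    """Fuse per-frame shallow features into one deep-feature approximation."""
    def __init__(self, sub_channels: int, num_subnets: int, out_channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(sub_channels * num_subnets, out_channels, 1)

    def forward(self, subnet_feats):   # one (B, C, H, W) map per recent frame
        return self.fuse(torch.cat(subnet_feats, dim=1))
```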
arXiv Detail & Related papers (2020-04-03T22:43:32Z) - CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
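As a rough sketch, a cross-reference step could gate both feature maps with a jointly derived channel descriptor, emphasizing what co-occurs in the support and query images; the specific gating form below is an illustrative assumption, not CRNet's exact mechanism:

```python
import torch
import torch.nn as nn

class CrossReference(nn.Module):
    """Mutually reinforce query and support features where they co-occur."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc_q = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.fc_s = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, query, support):          # both (B, C, H, W)
        gq = self.fc_q(query.mean(dim=(2, 3)))  # query channel descriptor
        gs = self.fc_s(support.mean(dim=(2, 3)))
        gate = (gq * gs)[:, :, None, None]      # high only where both are high
        return query * gate, support * gate
```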
arXiv Detail & Related papers (2020-03-24T04:55:43Z)