The Devil is in the Boundary: Exploiting Boundary Representation for
Basis-based Instance Segmentation
- URL: http://arxiv.org/abs/2011.13241v1
- Date: Thu, 26 Nov 2020 11:26:06 GMT
- Title: The Devil is in the Boundary: Exploiting Boundary Representation for
Basis-based Instance Segmentation
- Authors: Myungchul Kim, Sanghyun Woo, Dahun Kim, and In So Kweon
- Abstract summary: We propose Boundary Basis based Instance Segmentation (B2Inst) to learn a global boundary representation that can complement existing global-mask-based methods.
Our B2Inst leads to consistent improvements and accurately parses out the instance boundaries in a scene.
- Score: 85.153426159438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pursuing a more coherent scene understanding towards real-time vision
applications, single-stage instance segmentation has recently gained
popularity, achieving a simpler and more efficient design than its two-stage
counterparts. Besides, its global mask representation often yields accuracy
superior to that of the two-stage Mask R-CNN, which has been dominant thus far.
Despite the promising advances in single-stage methods, the finer delineation
of instance boundaries remains largely unexplored. Indeed, boundary information provides a
strong shape representation that can operate in synergy with the
fully-convolutional mask features of the single-stage segmenter. In this work,
we propose Boundary Basis based Instance Segmentation (B2Inst) to learn a
global boundary representation that can complement existing global-mask-based
methods, which often lack high-frequency details. In addition, we devise a
unified quality measure of both mask and boundary and introduce a network block
that learns to score its own per-instance predictions. When applied to the
strongest baselines in single-stage instance segmentation, our B2Inst yields
consistent improvements and accurately parses out the instance boundaries in a
scene. Whether compared against single-stage or two-stage frameworks, we
outperform existing state-of-the-art methods on the COCO dataset with the same
ResNet-50 and ResNet-101 backbones.
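To make the abstract concrete, below is a minimal sketch, not the authors' code, of how a global boundary basis can be appended to the mask bases of a YOLACT/BlendMask-style segmenter and combined with per-instance coefficients, together with a stand-in for the unified mask/boundary quality measure. All tensor shapes, function names, and the max-pool-based boundary band are assumptions for illustration.

```python
# Sketch only: boundary-basis mask assembly and a joint quality target.
import torch
import torch.nn.functional as F

def assemble_masks(bases, boundary_basis, coeffs):
    """Combine K global mask bases plus one global boundary basis
    into per-instance mask logits.
    bases:          (K, H, W)   output of the basis head
    boundary_basis: (1, H, W)   extra basis carrying boundary cues
    coeffs:         (N, K + 1)  per-instance combination weights
    returns:        (N, H, W)   per-instance mask logits
    """
    all_bases = torch.cat([bases, boundary_basis], dim=0)  # (K+1, H, W)
    return torch.einsum('nk,khw->nhw', coeffs, all_bases)

def boundary_band(mask, width=3):
    """Band around the contour of a soft binary mask (N, 1, H, W),
    built from max-pool based dilation and erosion (an assumption)."""
    pad = width // 2
    dilated = F.max_pool2d(mask, width, stride=1, padding=pad)
    eroded = 1.0 - F.max_pool2d(1.0 - mask, width, stride=1, padding=pad)
    return dilated - eroded

def unified_quality(pred, gt, eps=1e-6):
    """A hedged stand-in for the paper's joint mask/boundary quality:
    the mean of mask IoU and boundary-band IoU, for (N, 1, H, W) masks."""
    def iou(a, b):
        inter = (a * b).sum(dim=(1, 2, 3))
        union = (a + b - a * b).sum(dim=(1, 2, 3))
        return (inter + eps) / (union + eps)
    return 0.5 * (iou(pred, gt) + iou(boundary_band(pred), boundary_band(gt)))
```

In the paper, a scoring block learns to predict this kind of quality from network features so that instances can be re-ranked at inference; here the score is computed directly from masks only to show the form of the target.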
Related papers
- BLADE: Box-Level Supervised Amodal Segmentation through Directed
Expansion [10.57956193654977]
Box-level supervised amodal segmentation addresses the cost of amodal mask annotation by relying solely on ground-truth bounding boxes and instance classes as supervision.
We present a novel solution by introducing a directed expansion approach from visible masks to corresponding amodal masks.
Our approach involves a hybrid end-to-end network based on the overlapping region - the area where different instances intersect.
arXiv Detail & Related papers (2024-01-03T09:37:03Z)
- Exploiting Shape Cues for Weakly Supervised Semantic Segmentation [15.791415215216029]
Weakly supervised semantic segmentation (WSSS) aims to produce pixel-wise class predictions with only image-level labels for training.
We propose to exploit shape information to supplement the texture-biased property of convolutional neural networks (CNNs).
We further refine the predictions in an online fashion with a novel refinement method that takes into account both the class and the color affinities.
arXiv Detail & Related papers (2022-08-08T17:25:31Z)
- Semantic Attention and Scale Complementary Network for Instance Segmentation in Remote Sensing Images [54.08240004593062]
We propose an end-to-end multi-category instance segmentation model, which consists of a Semantic Attention (SEA) module and a Scale Complementary Mask Branch (SCMB).
The SEA module contains a simple fully convolutional semantic segmentation branch with extra supervision to strengthen the activation of instances of interest on the feature map.
SCMB extends the original single mask branch to trident mask branches and introduces complementary mask supervision at different scales.
arXiv Detail & Related papers (2021-07-25T08:53:59Z)
- BoundarySqueeze: Image Segmentation as Boundary Squeezing [104.43159799559464]
We propose a novel method for fine-grained high-quality image segmentation of both objects and scenes.
Inspired by dilation and erosion from morphological image processing, we treat the pixel-level segmentation problem as squeezing the object boundary (see the morphological sketch after this list).
Our method yields large gains on COCO and Cityscapes for both instance and semantic segmentation, and outperforms the previous state-of-the-art PointRend in both accuracy and speed under the same setting.
arXiv Detail & Related papers (2021-05-25T04:58:51Z)
- Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation [51.59290734837372]
We propose a conceptually simple yet effective post-processing refinement framework that improves boundary quality by re-segmenting small patches along the predicted mask contour (a sketch of this patch-extraction step follows this list).
The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on the Cityscapes benchmark.
By applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard.
arXiv Detail & Related papers (2021-04-12T07:10:48Z)
- Mask Encoding for Single Shot Instance Segmentation [97.99956029224622]
We propose a simple single-shot instance segmentation framework, termed mask encoding based instance segmentation (MEInst).
Instead of predicting the two-dimensional mask directly, MEInst distills it into a compact, fixed-dimensional representation vector (a PCA-style encoding sketch appears after this list).
We show that this much simpler and more flexible one-stage instance segmentation method can also achieve competitive performance.
arXiv Detail & Related papers (2020-03-26T02:51:17Z)
- PointINS: Point-based Instance Segmentation [117.38579097923052]
Mask representation in instance segmentation with Point-of-Interest (PoI) features is challenging because learning a high-dimensional mask feature for each instance imposes a heavy computing burden.
We propose an instance-aware convolution, which decomposes this mask representation learning task into two tractable modules (a dynamic-convolution sketch follows this list).
Along with instance-aware convolution, we propose PointINS, a simple and practical instance segmentation approach.
arXiv Detail & Related papers (2020-03-13T08:24:58Z)
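As referenced in the BoundarySqueeze entry above, a minimal sketch of the dilation/erosion view: the band between a dilated and an eroded mask is the uncertain region that a squeezing step would re-classify. The helper name and the 2-pixel width are assumptions.

```python
# Sketch only: the uncertain boundary band that a squeezing step
# would re-classify, via morphological dilation and erosion.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def squeeze_band(mask: np.ndarray, width: int = 2) -> np.ndarray:
    """mask: 2-D boolean array; returns the band of pixels between
    the dilated (outer) and eroded (inner) masks."""
    outer = binary_dilation(mask, iterations=width)
    inner = binary_erosion(mask, iterations=width)
    return outer & ~inner
```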
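For the BPR entry, a rough sketch of the patch-extraction step referenced above: crops are centered on contour pixels so that a small refinement network (not shown here) can re-segment them at higher resolution. Patch size, stride, and the crude subsampling scheme are all assumptions.

```python
# Sketch only: crop boxes centered on the predicted mask contour.
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_patch_boxes(mask, patch=64, step=32):
    """mask: 2-D boolean array; returns (y0, x0, y1, x1) crop boxes
    around contour pixels, crudely subsampled every `step` pixels."""
    contour = mask & ~binary_erosion(mask)
    ys, xs = np.nonzero(contour)
    boxes = []
    for y, x in zip(ys[::step], xs[::step]):
        y0, x0 = max(0, y - patch // 2), max(0, x - patch // 2)
        boxes.append((y0, x0, y0 + patch, x0 + patch))
    return boxes
```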
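For the MEInst entry, the fixed-dimensional mask representation can be illustrated with a PCA-style codebook over flattened ground-truth masks; the 28x28 mask size and 60-dim code are assumed values here, not necessarily the paper's exact configuration.

```python
# Sketch only: PCA-style encode/decode of binary masks into a
# compact, fixed-dimensional vector, as in mask-encoding approaches.
import numpy as np

def fit_codebook(masks, dim=60):
    """masks: (N, 28, 28) binary array of ground-truth masks."""
    X = masks.reshape(len(masks), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # Principal components of the centered mask matrix.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:dim]                     # (784,), (dim, 784)

def encode(mask, mean, comps):
    return comps @ (mask.ravel() - mean)      # fixed-dim code

def decode(code, mean, comps, thresh=0.5):
    recon = comps.T @ code + mean
    return (recon >= thresh).reshape(28, 28)  # back to a 2-D mask
```

In this view, the network only has to regress the short code per instance; decoding back to a full mask is a fixed linear map.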
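For the PointINS entry, a minimal sketch of instance-aware (dynamic) convolution: each instance's PoI feature vector is turned into its own kernel and applied to a shared mask feature map. The 1x1 kernel, shapes, and class name are assumptions; the paper's actual decomposition is richer.

```python
# Sketch only: per-instance 1x1 dynamic convolution over shared features.
import torch
import torch.nn as nn

class InstanceAwareConv(nn.Module):
    def __init__(self, poi_dim, feat_channels):
        super().__init__()
        # Turns one PoI feature vector into one 1x1 conv kernel.
        self.kernel_gen = nn.Linear(poi_dim, feat_channels)

    def forward(self, shared_feat, poi_feats):
        """shared_feat: (C, H, W) mask features shared by all instances
        poi_feats:     (N, D) one feature vector per instance
        returns:       (N, H, W) per-instance mask logits"""
        kernels = self.kernel_gen(poi_feats)  # (N, C)
        return torch.einsum('nc,chw->nhw', kernels, shared_feat)
```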