ISTR: End-to-End Instance Segmentation with Transformers
- URL: http://arxiv.org/abs/2105.00637v2
- Date: Thu, 6 May 2021 03:10:33 GMT
- Title: ISTR: End-to-End Instance Segmentation with Transformers
- Authors: Jie Hu, Liujuan Cao, Yao Lu, ShengChuan Zhang, Yan Wang, Ke Li, Feiyue
Huang, Ling Shao, Rongrong Ji
- Abstract summary: We propose an instance segmentation Transformer, termed ISTR, which is the first end-to-end framework of its kind.
ISTR predicts low-dimensional mask embeddings, and matches them with ground truth mask embeddings for the set loss.
Benefiting from the proposed end-to-end mechanism, ISTR demonstrates state-of-the-art performance even with approximation-based suboptimal embeddings.
- Score: 147.14073165997846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: End-to-end paradigms significantly improve the accuracy of various
deep-learning-based computer vision models. To this end, tasks like object
detection have been upgraded by replacing non-end-to-end components, such as
removing non-maximum suppression by training with a set loss based on bipartite
matching. However, such an upgrade is not applicable to instance segmentation,
due to its significantly higher output dimensions compared to object detection.
In this paper, we propose an instance segmentation Transformer, termed ISTR,
which is the first end-to-end framework of its kind. ISTR predicts
low-dimensional mask embeddings, and matches them with ground truth mask
embeddings for the set loss. Besides, ISTR concurrently conducts detection and
segmentation with a recurrent refinement strategy, which provides a new way to
achieve instance segmentation compared to the existing top-down and bottom-up
frameworks. Benefiting from the proposed end-to-end mechanism, ISTR
demonstrates state-of-the-art performance even with approximation-based
suboptimal embeddings. Specifically, ISTR obtains a 46.8/38.6 box/mask AP using
ResNet50-FPN, and a 48.1/39.9 box/mask AP using ResNet101-FPN, on the MS COCO
dataset. Quantitative and qualitative results reveal the promising potential of
ISTR as a solid baseline for instance-level recognition. Code has been made
available at: https://github.com/hujiecpp/ISTR.
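The two core ideas of the abstract, compressing each instance mask into a low-dimensional embedding and bipartite-matching predicted embeddings to ground-truth embeddings before computing the set loss, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the DCT-based mask encoder, the embedding dimension of 60, and the plain L2 matching cost are all assumptions chosen for brevity (the actual ISTR loss also involves class and box terms).

```python
# Toy sketch of ISTR's set-loss idea (illustrative, not the released code):
# 1) compress binary masks into low-dimensional embeddings, then
# 2) Hungarian-match predicted embeddings to ground-truth embeddings
#    and average the matched distances as the loss.
import numpy as np
from scipy.fft import dctn
from scipy.optimize import linear_sum_assignment

def mask_to_embedding(mask, dim=60):
    """Compress a 2-D binary mask into its `dim` lowest-frequency DCT
    coefficients (taking a top-left square block for simplicity)."""
    coeffs = dctn(mask.astype(np.float64), norm="ortho")
    k = int(np.ceil(np.sqrt(dim)))
    return coeffs[:k, :k].ravel()[:dim]

def set_loss(pred_embs, gt_embs):
    """Bipartite-match predictions to ground truths on L2 distance,
    then return the mean matched distance and the matching itself."""
    cost = np.linalg.norm(pred_embs[:, None, :] - gt_embs[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean(), list(zip(rows, cols))
```

Because the Hungarian matching is permutation-invariant, the loss does not depend on the order in which instances are predicted, which is what removes the need for NMS-style post-processing in such end-to-end pipelines.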
Related papers
- Bridge the Points: Graph-based Few-shot Segment Anything Semantically [79.1519244940518]
Recent advancements in pre-training techniques have enhanced the capabilities of vision foundation models.
Recent studies extend SAM to Few-shot Semantic Segmentation (FSS).
We propose a simple yet effective approach based on graph analysis.
arXiv Detail & Related papers (2024-10-09T15:02:28Z)
- SRFormer: Text Detection Transformer with Incorporated Segmentation and Regression [6.74412860849373]
We propose SRFormer, a unified DETR-based model with amalgamated segmentation and regression.
Our empirical analysis indicates that favorable segmentation predictions can be obtained at the initial decoder layers.
Our method demonstrates exceptional robustness, superior training and data efficiency, and state-of-the-art performance.
arXiv Detail & Related papers (2023-08-21T07:34:31Z)
- Adaptive Spot-Guided Transformer for Consistent Local Feature Matching [64.30749838423922]
We propose Adaptive Spot-Guided Transformer (ASTR) for local feature matching.
ASTR models the local consistency and scale variations in a unified coarse-to-fine architecture.
arXiv Detail & Related papers (2023-03-29T12:28:01Z)
- UniInst: Unique Representation for End-to-End Instance Segmentation [29.974973664317485]
We propose a box-free and NMS-free end-to-end instance segmentation framework, termed UniInst.
Specifically, we design an instance-aware one-to-one assignment scheme, which dynamically assigns one unique representation to each instance.
With these techniques, our UniInst, the first FCN-based end-to-end instance segmentation framework, achieves competitive performance.
arXiv Detail & Related papers (2022-05-25T10:40:26Z)
- Real-Time Scene Text Detection with Differentiable Binarization and Adaptive Scale Fusion [62.269219152425556]
Segmentation-based scene text detection methods have drawn extensive attention in the scene text detection field.
We propose a Differentiable Binarization (DB) module that integrates the binarization process into a segmentation network.
An efficient Adaptive Scale Fusion (ASF) module is proposed to improve the scale robustness by fusing features of different scales adaptively.
arXiv Detail & Related papers (2022-02-21T15:30:14Z)
- Semantic Attention and Scale Complementary Network for Instance Segmentation in Remote Sensing Images [54.08240004593062]
We propose an end-to-end multi-category instance segmentation model, which consists of a Semantic Attention (SEA) module and a Scale Complementary Mask Branch (SCMB).
The SEA module contains a simple fully convolutional semantic segmentation branch with extra supervision to strengthen the activation of instances of interest on the feature map.
SCMB extends the original single mask branch to trident mask branches and introduces complementary mask supervision at different scales.
arXiv Detail & Related papers (2021-07-25T08:53:59Z)
- End-to-End Object Detection with Transformers [88.06357745922716]
We present a new method that views object detection as a direct set prediction problem.
Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components.
The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a Transformer encoder-decoder architecture.
arXiv Detail & Related papers (2020-05-26T17:06:38Z)
- Weakly Supervised Instance Segmentation by Deep Community Learning [39.18749732409763]
We present a weakly supervised instance segmentation algorithm based on deep community learning with multiple tasks.
We address this problem by designing a unified deep neural network architecture.
The proposed algorithm achieves state-of-the-art performance in the weakly supervised setting.
arXiv Detail & Related papers (2020-01-30T08:35:42Z)