EOLO: Embedded Object Segmentation only Look Once
- URL: http://arxiv.org/abs/2004.00123v1
- Date: Tue, 31 Mar 2020 21:22:05 GMT
- Title: EOLO: Embedded Object Segmentation only Look Once
- Authors: Longfei Zeng and Mohammed Sabah
- Abstract summary: We introduce an anchor-free, single-shot instance segmentation method that is conceptually simple, with three independent branches, fully convolutional, and easy to embed into mobile and embedded devices.
Our method, referred to as EOLO, reformulates instance segmentation as predicting semantic segmentation and distinguishing overlapping objects, through instance center classification and 4D distance regression on each pixel.
Without any bells and whistles, EOLO achieves 27.7% mask mAP at IoU50 and reaches 30 FPS on a 1080Ti GPU, with single-model and single-scale training/testing on the challenging COCO2017 dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce an anchor-free, single-shot instance
segmentation method that is conceptually simple, with three independent
branches, fully convolutional, and easy to embed into mobile and embedded
devices.
Our method, referred to as EOLO, reformulates instance segmentation as
predicting semantic segmentation and distinguishing overlapping objects,
through instance center classification and 4D distance regression on each
pixel. Moreover, we propose an effective loss function that handles sampling
high-quality center-of-gravity examples and optimizing the 4D distance
regression, which significantly improves mAP performance.
Without any bells and whistles, EOLO achieves 27.7% mask mAP at IoU50 and
reaches 30 FPS on a 1080Ti GPU, with single-model and single-scale
training/testing on the challenging COCO2017 dataset.
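As a reading aid, here is a minimal sketch of how per-pixel instance-center scores and 4D distances (left, top, right, bottom) could be decoded into instance boxes. It follows the generic anchor-free decoding implied by the abstract; the function and parameter names (decode_instances, stride, score_thresh) are illustrative assumptions, not EOLO's reference implementation.

```python
# Illustrative sketch only: decode per-pixel center scores and 4D distance
# predictions into candidate instance boxes. Shapes and names are assumptions.
import numpy as np

def decode_instances(center_scores, distances, stride=8, score_thresh=0.3):
    """center_scores: (H, W) instance-center probabilities on the feature map.
    distances: (H, W, 4) per-pixel distances (left, top, right, bottom) in pixels.
    Returns candidate instances as (score, x1, y1, x2, y2), highest score first."""
    candidates = []
    ys, xs = np.nonzero(center_scores > score_thresh)
    for y, x in zip(ys, xs):
        # Map the feature-map location back to image coordinates.
        cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
        left, top, right, bottom = distances[y, x]
        candidates.append((float(center_scores[y, x]),
                           cx - left, cy - top, cx + right, cy + bottom))
    # A non-maximum-suppression step would normally follow to merge duplicates,
    # and the semantic-segmentation branch would then be cropped by each box to
    # recover the final instance mask.
    return sorted(candidates, key=lambda c: c[0], reverse=True)
```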
For the first time, we contrast how recent methods understand instance
segmentation, in terms of top-down, bottom-up, and direct-prediction
paradigms. We then describe our model and present the related experiments and
results. We hope that the proposed EOLO framework can serve as a fundamental
baseline for single-shot instance segmentation in real-time industrial
scenarios.
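The abstract also credits much of the mAP gain to a loss that couples sampling high-quality center-of-gravity examples with 4D distance regression. The excerpt does not give the exact formulation, so the sketch below assumes the common anchor-free recipe of a focal classification term over center scores plus an IoU term over the regressed distances; every name here (eolo_style_loss, pos_mask, and so on) is hypothetical rather than taken from the paper.

```python
# Hedged sketch of a center-classification + 4D-distance regression loss,
# assuming (not quoting) a focal term for centers and an IoU term for distances.
import torch
import torch.nn.functional as F

def iou_loss(pred, target, eps=1e-6):
    """pred, target: (N, 4) distances (left, top, right, bottom) at positive pixels."""
    pred_area = (pred[:, 0] + pred[:, 2]) * (pred[:, 1] + pred[:, 3])
    target_area = (target[:, 0] + target[:, 2]) * (target[:, 1] + target[:, 3])
    inter_w = torch.min(pred[:, 0], target[:, 0]) + torch.min(pred[:, 2], target[:, 2])
    inter_h = torch.min(pred[:, 1], target[:, 1]) + torch.min(pred[:, 3], target[:, 3])
    inter = inter_w.clamp(min=0) * inter_h.clamp(min=0)
    union = pred_area + target_area - inter
    return -torch.log((inter + eps) / (union + eps)).mean()

def eolo_style_loss(center_logits, distance_preds, center_targets,
                    distance_targets, pos_mask, alpha=0.25, gamma=2.0):
    """center_logits: (H, W) raw center scores; center_targets: (H, W) floats in {0, 1};
    distance_*: (H, W, 4); pos_mask: (H, W) bool mask of sampled center pixels."""
    p = torch.sigmoid(center_logits)
    # Focal-style classification term over all pixels, normalised by positives.
    ce = F.binary_cross_entropy_with_logits(center_logits, center_targets, reduction="none")
    p_t = p * center_targets + (1 - p) * (1 - center_targets)
    alpha_t = alpha * center_targets + (1 - alpha) * (1 - center_targets)
    cls_loss = (alpha_t * (1 - p_t) ** gamma * ce).sum() / pos_mask.sum().clamp(min=1)
    # IoU regression term only on the sampled high-quality center pixels.
    if pos_mask.any():
        reg_loss = iou_loss(distance_preds[pos_mask], distance_targets[pos_mask])
    else:
        reg_loss = center_logits.sum() * 0.0  # no positive samples in this batch
    return cls_loss + reg_loss
```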
Related papers
- Single-Stage Open-world Instance Segmentation with Cross-task
Consistency Regularization [33.434628514542375]
Open-world instance segmentation aims to segment class-agnostic instances from images.
This paper proposes a single-stage framework to produce a mask for each instance directly.
We show that the proposed method can achieve impressive results in both fully-supervised and semi-supervised settings.
arXiv Detail & Related papers (2022-08-18T18:55:09Z)
- PointInst3D: Segmenting 3D Instances by Points [136.7261709896713]
We propose a fully-convolutional 3D point cloud instance segmentation method that works in a per-point prediction fashion.
We find the key to its success is assigning a suitable target to each sampled point.
Our approach achieves promising results on both ScanNet and S3DIS benchmarks.
arXiv Detail & Related papers (2022-04-25T02:41:46Z)
- Sparse Instance Activation for Real-Time Instance Segmentation [72.23597664935684]
We propose a conceptually novel, efficient, and fully convolutional framework for real-time instance segmentation.
SparseInst has extremely fast inference speed and achieves 40 FPS and 37.9 AP on the COCO benchmark.
arXiv Detail & Related papers (2022-03-24T03:15:39Z)
- SOLO: A Simple Framework for Instance Segmentation [84.00519148562606]
"instance categories" assigns categories to each pixel within an instance according to the instance's location.
"SOLO" is a simple, direct, and fast framework for instance segmentation with strong performance.
Our approach achieves state-of-the-art results for instance segmentation in terms of both speed and accuracy.
arXiv Detail & Related papers (2021-06-30T09:56:54Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric-learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification because sample pairs are constructed with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- INSTA-YOLO: Real-Time Instance Segmentation [2.726684740197893]
We propose Insta-YOLO, a novel one-stage end-to-end deep learning model for real-time instance segmentation.
The proposed model is inspired by the YOLO one-shot object detector, with the box regression loss replaced by regression in the localization head.
We evaluate our model on three datasets, namely Carvana, Cityscapes, and Airbus.
arXiv Detail & Related papers (2021-02-12T21:17:29Z)
- Scaling Semantic Segmentation Beyond 1K Classes on a Single GPU [87.48110331544885]
We propose a novel training methodology to train and scale existing semantic segmentation models.
We demonstrate a clear benefit of our approach on a dataset with 1284 classes, bootstrapped from LVIS and COCO annotations, with three times better mIoU than the DeeplabV3+ model.
arXiv Detail & Related papers (2020-12-14T13:12:38Z)
- The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation [93.17367076148348]
We investigate the performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset.
We unveil that a major cause is the inaccurate classification of object proposals.
We propose a simple calibration framework to more effectively alleviate classification head bias with a bi-level class balanced sampling approach.
arXiv Detail & Related papers (2020-07-23T12:49:07Z)
- Objectness-Aware Few-Shot Semantic Segmentation [31.13009111054977]
We show how to increase overall model capacity to achieve improved performance.
We introduce objectness, which is class-agnostic and therefore not prone to overfitting.
Given only one annotated example of an unseen category, experiments show that our method outperforms state-of-the-art methods with respect to mIoU.
arXiv Detail & Related papers (2020-04-06T19:12:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.