FAPIS: A Few-shot Anchor-free Part-based Instance Segmenter
- URL: http://arxiv.org/abs/2104.00073v1
- Date: Wed, 31 Mar 2021 19:09:43 GMT
- Title: FAPIS: A Few-shot Anchor-free Part-based Instance Segmenter
- Authors: Khoi Nguyen, Sinisa Todorovic
- Abstract summary: We evaluate a new few-shot anchor-free part-based instance segmenter FAPIS.
Key novelty is in explicit modeling of latent object parts shared across training object classes.
Our evaluation on the benchmark COCO-20i dataset demonstrates that we significantly outperform the state of the art.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper is about few-shot instance segmentation, where training and test
image sets do not share the same object classes. We specify and evaluate a new
few-shot anchor-free part-based instance segmenter FAPIS. Our key novelty is in
explicit modeling of latent object parts shared across training object classes,
which is expected to facilitate our few-shot learning on new classes in
testing. We specify a new anchor-free object detector aimed at scoring and
regressing locations of foreground bounding boxes, as well as estimating
relative importance of latent parts within each box. Also, we specify a new
network for delineating and weighting latent parts for the final instance
segmentation within every detected bounding box. Our evaluation on the
benchmark COCO-20i dataset demonstrates that we significantly outperform the
state of the art.
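The part-based mask assembly described in the abstract (latent part masks fused according to their estimated relative importance within a detected box) can be sketched as a weighted combination. The function below is a hypothetical illustration only: the names, the softmax normalization, and the 0.5 binarization threshold are assumptions, not the paper's exact network or loss.

```python
import numpy as np

def combine_part_masks(part_masks, part_weights):
    """Fuse K latent part masks into one instance mask.

    part_masks: (K, H, W) array of per-part soft masks in [0, 1].
    part_weights: (K,) relative importance scores for the detected box.
    Illustrative sketch of weighted-part fusion; not FAPIS's actual layers.
    """
    w = np.exp(part_weights) / np.exp(part_weights).sum()  # softmax-normalize
    fused = np.tensordot(w, part_masks, axes=1)            # (H, W) weighted sum
    return (fused > 0.5).astype(np.uint8)                  # binarize
```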
Related papers
- ISAR: A Benchmark for Single- and Few-Shot Object Instance Segmentation and Re-Identification
We propose ISAR, a benchmark and baseline method for single- and few-shot object identification.
We provide a semi-synthetic dataset of video sequences with ground-truth semantic annotations.
Our benchmark aligns with the emerging research trend of unifying Multi-Object Tracking, Video Object Segmentation, and Re-Identification.
arXiv Detail & Related papers (2023-11-05T18:51:33Z)
- Fine-grained Few-shot Recognition by Deep Object Parsing
We parse a test instance by inferring the K parts, where each part occupies a distinct location in the feature space.
We recognize a test instance by comparing its active templates and the relative geometry of its part locations.
arXiv Detail & Related papers (2022-07-14T17:59:05Z)
- iFS-RCNN: An Incremental Few-shot Instance Segmenter
We make two contributions by extending the common Mask-RCNN framework in its second stage.
We specify a new object class classifier based on the probit function and a new uncertainty-guided bounding-box predictor.
Our contributions produce significant performance gains on the COCO dataset over the state of the art.
arXiv Detail & Related papers (2022-05-31T06:42:03Z)
- Learning with Free Object Segments for Long-Tailed Instance Segmentation
We find that an abundance of instance segments can potentially be obtained freely from object-centric images.
Motivated by these insights, we propose FreeSeg for extracting and leveraging these "free" object segments.
FreeSeg achieves state-of-the-art accuracy for segmenting rare object categories.
arXiv Detail & Related papers (2022-02-22T19:06:16Z)
- Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition
Few-Shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel few-shot learning framework that automatically figures out foreground objects at both the pretraining and evaluation stages.
arXiv Detail & Related papers (2021-07-16T07:46:41Z)
- Object-Guided Instance Segmentation With Auxiliary Feature Refinement for Biological Images
Instance segmentation is of great importance for many biological applications, such as study of neural cell interactions, plant phenotyping, and quantitatively measuring how cells react to drug treatment.
Box-based instance segmentation methods capture objects via bounding boxes and then perform individual segmentation within each bounding box region.
Our method first detects the center points of the objects, from which the bounding box parameters are then predicted.
The segmentation branch reuses the object features as guidance to separate the target object from neighboring ones within the same bounding box region.
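The detect-then-segment pipeline above (center points detected first, box parameters predicted from them) can be illustrated with the common anchor-free box parameterization, where the network regresses distances from the center to the four box sides. The function and parameter names below are assumptions for illustration, not the paper's exact prediction heads.

```python
def box_from_center(cx, cy, left, top, right, bottom):
    """Recover an (x1, y1, x2, y2) bounding box from a detected center
    point (cx, cy) and the four regressed distances to the box sides.
    A common anchor-free parameterization, shown here as a sketch."""
    return (cx - left, cy - top, cx + right, cy + bottom)
```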
arXiv Detail & Related papers (2021-06-14T04:35:36Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification because sample pairs are constructed with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- Pointly-Supervised Instance Segmentation
We propose point-based instance-level annotation, a new form of weak supervision for instance segmentation.
It combines the standard bounding box annotation with labeled points that are uniformly sampled inside each bounding box.
In our experiments, Mask R-CNN models trained on COCO, PASCAL VOC, Cityscapes, and LVIS with only 10 annotated points per object achieve 94%--98% of their fully-supervised performance.
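The annotation scheme above pairs each bounding box with points sampled uniformly inside it. A minimal sketch of that sampling step is shown below; the function name and the default of 10 points (matching the experiment above) are illustrative assumptions, not the paper's annotation tool.

```python
import random

def sample_points_in_box(box, n=10, seed=None):
    """Uniformly sample n annotation points inside box = (x1, y1, x2, y2).
    Sketch of point-based weak supervision; labeling of each sampled
    point as object/background is done by a human annotator, not here."""
    rng = random.Random(seed)
    x1, y1, x2, y2 = box
    return [(rng.uniform(x1, x2), rng.uniform(y1, y2)) for _ in range(n)]
```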
arXiv Detail & Related papers (2021-04-13T17:59:40Z)
- Part-aware Prototype Network for Few-shot Semantic Segmentation
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
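The decomposition idea above (a holistic class representation split into part-aware prototypes) can be sketched by clustering masked support features into k part vectors. The tiny k-means below is an illustrative stand-in for the paper's graph-neural-network prototype generation; all names and the clustering choice are assumptions.

```python
import numpy as np

def part_prototypes(features, mask, k=4, iters=10, seed=0):
    """Decompose a class's foreground support features into k part-aware
    prototypes via k-means (illustrative stand-in, not the paper's GNN).

    features: (H, W, C) support feature map; mask: (H, W) binary FG mask.
    Returns a (k, C) array of part prototypes.
    """
    rng = np.random.default_rng(seed)
    fg = features[mask.astype(bool)]                     # (N, C) FG vectors
    protos = fg[rng.choice(len(fg), k, replace=False)]   # init from samples
    for _ in range(iters):
        d = ((fg[:, None] - protos[None]) ** 2).sum(-1)  # (N, k) sq. distances
        assign = d.argmin(1)                             # nearest prototype
        for j in range(k):
            if (assign == j).any():
                protos[j] = fg[assign == j].mean(0)      # update centroid
    return protos
```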
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
- A Few-Shot Sequential Approach for Object Counting
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.