StarNet: towards Weakly Supervised Few-Shot Object Detection
- URL: http://arxiv.org/abs/2003.06798v3
- Date: Thu, 17 Sep 2020 11:37:25 GMT
- Title: StarNet: towards Weakly Supervised Few-Shot Object Detection
- Authors: Leonid Karlinsky and Joseph Shtok and Amit Alfassy and Moshe
Lichtenstein and Sivan Harary and Eli Schwartz and Sivan Doveh and Prasanna
Sattigeri and Rogerio Feris and Alexander Bronstein and Raja Giryes
- Abstract summary: We introduce StarNet - a few-shot model featuring an end-to-end differentiable non-parametric star-model detection and classification head.
Through this head, the backbone is meta-trained using only image-level labels to produce good features for jointly localizing and classifying previously unseen categories of few-shot test tasks.
As a few-shot detector, StarNet requires no bounding box annotations, either during pre-training or when adapting to novel classes.
- Score: 87.80771067891418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot detection and classification have advanced significantly in recent
years. Yet, detection approaches require strong annotation (bounding boxes)
both for pre-training and for adaptation to novel classes, and classification
approaches rarely provide localization of objects in the scene. In this paper,
we introduce StarNet - a few-shot model featuring an end-to-end differentiable
non-parametric star-model detection and classification head. Through this head,
the backbone is meta-trained using only image-level labels to produce good
features for jointly localizing and classifying previously unseen categories of
few-shot test tasks using a star-model that geometrically matches between the
query and support images (to find corresponding object instances). As a
few-shot detector, StarNet requires no bounding box annotations, either during
pre-training or when adapting to novel classes. It can thus be
applied to the previously unexplored and challenging task of Weakly Supervised
Few-Shot Object Detection (WS-FSOD), where it attains significant improvements
over the baselines. In addition, StarNet shows significant gains on few-shot
classification benchmarks that are less cropped around the objects (where
object localization is key).
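
To make the star-model idea concrete, here is a minimal illustrative sketch of Hough-style voting between a query and a support feature grid. The cosine-similarity matching, the tensor shapes, and the function name are assumptions made for exposition, not StarNet's exact formulation (the paper's head is fully differentiable and vectorized; the loops below are for readability only).

```python
import torch
import torch.nn.functional as F

def star_model_vote(query_feats: torch.Tensor, support_feats: torch.Tensor) -> torch.Tensor:
    """Hough-style star-model voting between query and support feature grids.

    query_feats, support_feats: (C, H, W) backbone features (illustrative shapes).
    Returns a (2H-1, 2W-1) map of votes over the relative spatial offset between
    the object in the query image and the object in the support image.
    """
    C, H, W = query_feats.shape
    q = F.normalize(query_feats.reshape(C, -1), dim=0)    # (C, H*W) unit columns
    s = F.normalize(support_feats.reshape(C, -1), dim=0)  # (C, H*W) unit columns

    # Pairwise cosine similarity between every query cell and every support cell.
    sim = (q.t() @ s).clamp(min=0)                        # (H*W, H*W)

    # Every (query cell, support cell) pair votes for the spatial offset between
    # the two cells, weighted by similarity; pairs belonging to the same object
    # agree on one offset, so their votes pile up into a peak.
    votes = torch.zeros(2 * H - 1, 2 * W - 1)
    for i in range(H * W):
        yi, xi = divmod(i, W)
        for j in range(H * W):
            yj, xj = divmod(j, W)
            votes[yi - yj + H - 1, xi - xj + W - 1] += sim[i, j]
    return votes
```

The peak of the vote map gives a localization hypothesis; back-projecting the matches that contributed to it yields a rough object region, which is how geometric matching of this kind can localize objects without any box supervision.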
Related papers
- Identification of Novel Classes for Improving Few-Shot Object Detection [12.013345715187285]
Few-shot object detection (FSOD) methods offer a remedy by realizing robust object detection using only a few training samples per class.
We develop a semi-supervised algorithm to detect and then utilize unlabeled novel objects as positive samples during training to improve FSOD performance.
Our experimental results indicate that our method is effective and outperforms the existing state-of-the-art (SOTA) FSOD methods.
arXiv Detail & Related papers (2023-03-18T14:12:52Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose Incremental-DETR, which performs incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting on the scarce novel-class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
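
As a rough illustration of the distillation idea, a temperature-scaled KL term can pull the fine-tuned detector's base-class predictions toward those of a frozen copy; the shapes and the temperature below are assumptions for exposition, not Incremental-DETR's actual recipe.

```python
import torch.nn.functional as F

def base_class_distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL distillation between a frozen pre-fine-tuning copy (teacher) and the
    fine-tuned detector (student) on base-class logits, to resist forgetting.

    student_logits, teacher_logits: (num_predictions, num_base_classes).
    """
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # The t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t ** 2)
```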
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- Meta Faster R-CNN: Towards Accurate Few-Shot Object Detection with Attentive Feature Alignment [33.446875089255876]
Few-shot object detection (FSOD) aims to detect objects from only a few examples.
We propose a meta-learning-based few-shot object detection method that transfers meta-knowledge learned from data-abundant base classes to data-scarce novel classes.
arXiv Detail & Related papers (2021-04-15T19:01:27Z)
- Closing the Generalization Gap in One-Shot Object Detection [92.82028853413516]
We show that the key to strong few-shot detection models may not lie in sophisticated metric learning approaches, but instead in scaling the number of categories.
Future data annotation efforts should therefore focus on wider datasets and annotate a larger number of categories.
arXiv Detail & Related papers (2020-11-09T09:31:17Z)
- Cross-Supervised Object Detection [42.783400918552765]
We show how to build better object detectors from weakly labeled images of new categories by leveraging knowledge learned from fully labeled base categories.
We propose a unified framework that combines a detection head trained from instance-level annotations and a recognition head learned from image-level annotations.
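
A hypothetical sketch of such a two-head design (module names and shapes are made up for illustration): one head is trained with instance-level box labels on base categories, while the other is trained from image-level labels by aggregating evidence over region proposals.

```python
import torch.nn as nn

class CrossSupervisedHeads(nn.Module):
    """Illustrative two-head design over a shared backbone: a detection head
    supervised with instance-level (box) labels on base categories, and a
    recognition head supervised with image-level labels on new categories."""

    def __init__(self, feat_dim: int, num_base: int, num_novel: int):
        super().__init__()
        self.detection_head = nn.Linear(feat_dim, num_base + 1)  # +1 for background
        self.recognition_head = nn.Linear(feat_dim, num_novel)

    def forward(self, region_feats):
        # region_feats: (num_regions, feat_dim) pooled proposal features.
        det_scores = self.detection_head(region_feats)           # per-region scores
        # Max-pool region scores into one image-level score per new category, so
        # the recognition head can be trained from image-level labels alone.
        rec_scores = self.recognition_head(region_feats).max(dim=0).values
        return det_scores, rec_scores
```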
arXiv Detail & Related papers (2020-06-26T15:33:48Z)
- Exploring Bottom-up and Top-down Cues with Attentive Learning for Webly Supervised Object Detection [76.9756607002489]
We propose a webly supervised object detection (WebSOD) method for novel classes.
Our proposed method combines bottom-up and top-down cues for novel class detection.
We demonstrate our proposed method on the PASCAL VOC dataset with three different novel/base splits.
arXiv Detail & Related papers (2020-03-22T03:11:24Z)
- Any-Shot Object Detection [81.88153407655334]
'Any-shot detection' is the setting where entirely unseen and few-shot categories can co-occur during inference.
We propose a unified any-shot detection model that can concurrently learn to detect both zero-shot and few-shot object classes.
Our framework can also be used solely for zero-shot or few-shot detection tasks.
arXiv Detail & Related papers (2020-03-16T03:43:15Z)
- Incremental Few-Shot Object Detection [96.02543873402813]
OpeN-ended Centre nEt (ONCE) is a detector for incrementally learning to detect novel class objects from a few examples.
ONCE fully respects the incremental learning paradigm: registering a novel class requires only a single forward pass over its few-shot training samples.
arXiv Detail & Related papers (2020-03-10T12:56:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.