MINI: Mining Implicit Novel Instances for Few-Shot Object Detection
- URL: http://arxiv.org/abs/2205.03381v1
- Date: Fri, 6 May 2022 17:26:48 GMT
- Title: MINI: Mining Implicit Novel Instances for Few-Shot Object Detection
- Authors: Yuhang Cao, Jiaqi Wang, Yiqi Lin, Dahua Lin
- Abstract summary: Mining Implicit Novel Instances (MINI) is a novel framework to mine implicit novel instances as auxiliary training samples.
MINI achieves new state-of-the-art performance across all shots and splits.
- Score: 73.5061386065382
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning from a few training samples is a desirable ability of an object
detector, inspiring the explorations of Few-Shot Object Detection (FSOD). Most
existing approaches employ a pretrain-transfer paradigm. The model is first
pre-trained on base classes with abundant data and then transferred to novel
classes with a few annotated samples. Despite the substantial progress, FSOD
performance is still far from satisfactory. During pre-training, due to the
co-occurrence between base and novel classes, the model learns to treat the
co-occurring novel classes as background. During transfer, given only scarce
samples of the novel classes, the model struggles to learn discriminative
features that distinguish novel instances from the background and from base
classes. To overcome these obstacles, we propose Mining Implicit Novel
Instances (MINI), a framework that mines implicit novel instances, which widely
exist in the abundant base data but are unannotated, as auxiliary training
samples. MINI
comprises an offline mining mechanism and an online mining mechanism. The
offline mining mechanism leverages a self-supervised discriminative model to
collaboratively mine implicit novel instances with a trained FSOD network.
Taking the mined novel instances as auxiliary training samples, the online
mining mechanism adopts a teacher-student framework to simultaneously update
the FSOD network and the mined implicit novel instances on the fly. Extensive
experiments on the PASCAL VOC and MS-COCO datasets show that MINI achieves new
state-of-the-art performance across all shots and splits. The significant
performance improvements demonstrate the superiority of our method.
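The abstract outlines the two mining mechanisms only at a high level. The sketch below is an illustrative reconstruction under stated assumptions, not the paper's implementation: it assumes the mining filter combines a detector-confidence threshold with agreement between a self-supervised embedding of the box crop and a novel-class prototype, and that the online teacher tracks the student by exponential moving average. All names (`mine_instances`, `ssl_embed`, `det_thr`, `sim_thr`, `teacher.detect`) and thresholds are hypothetical.
```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def mine_instances(detections, ssl_embed, prototypes, det_thr=0.7, sim_thr=0.6):
    """Keep a detection on an un-annotated base image only if the detector is confident
    AND a self-supervised embedding of the box crop agrees with the novel-class
    prototype. In this sketch the same filter serves the offline phase (with the
    trained FSOD network) and the online phase (with the teacher)."""
    mined = []
    for det in detections:  # det: {"crop", "label", "score", "box"}
        if det["score"] < det_thr:
            continue
        if cosine_sim(ssl_embed(det["crop"]), prototypes[det["label"]]) >= sim_thr:
            mined.append(det)
    return mined

def ema_update(teacher_w, student_w, momentum=0.999):
    """Teacher weights as an exponential moving average of the student weights."""
    return {k: momentum * teacher_w[k] + (1.0 - momentum) * student_w[k]
            for k in teacher_w}

def online_step(student, teacher, base_images, ssl_embed, prototypes):
    """One online-mining iteration (hypothetical interfaces): the teacher re-mines
    implicit novel instances on the fly, the student would train on annotations plus
    the refreshed pseudo-labels (gradient step omitted), and the teacher tracks the
    student via EMA."""
    pseudo_labels = [mine_instances(teacher.detect(img), ssl_embed, prototypes)
                     for img in base_images]
    # student.train_step(base_images, pseudo_labels)  # omitted: standard detection loss
    teacher.weights = ema_update(teacher.weights, student.weights)
    return pseudo_labels
```
The point illustrated is that a pseudo-label is accepted only when two independent signals agree, which is one plausible way to limit confirmation bias when mining unlabeled novel instances from base images.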
Related papers
- Open-Vocabulary Object Detection with Meta Prompt Representation and Instance Contrastive Optimization [63.66349334291372]
We propose a framework with Meta prompt and Instance Contrastive learning (MIC) schemes.
Firstly, we simulate a novel-class-emerging scenario to help the learned class and background prompts generalize to novel classes.
Secondly, we design an instance-level contrastive strategy to promote intra-class compactness and inter-class separation, which benefits generalization of the detector to novel class objects.
arXiv Detail & Related papers (2024-03-14T14:25:10Z)
- ProxyDet: Synthesizing Proxy Novel Classes via Classwise Mixup for Open-Vocabulary Object Detection [7.122652901894367]
Open-vocabulary object detection (OVOD) aims to recognize novel objects whose categories are not included in the training set.
We present a novel, yet simple technique that helps generalization on the overall distribution of novel classes.
arXiv Detail & Related papers (2023-12-12T13:45:56Z)
- Memorizing Complementation Network for Few-Shot Class-Incremental Learning [109.4206979528375]
We propose a Memorizing Complementation Network (MCNet) that ensembles multiple models whose memorized knowledge complements each other in novel tasks.
We develop a Prototype Smoothing Hard-mining Triplet (PSHT) loss to push novel samples away not only from each other in the current task but also from the old distribution.
arXiv Detail & Related papers (2022-08-11T02:32:41Z)
- Demystifying the Base and Novel Performances for Few-shot Class-incremental Learning [15.762281194023462]
Few-shot class-incremental learning (FSCIL) addresses challenging real-world scenarios where unseen novel classes continually arrive with few samples.
It requires developing a model that recognizes the novel classes without forgetting prior knowledge.
It is shown that our straightforward method has performance comparable to sophisticated state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-18T00:39:47Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Few-shot Weakly-Supervised Object Detection via Directional Statistics [55.97230224399744]
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD).
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z)
- Incremental Few-Shot Object Detection [96.02543873402813]
OpeN-ended Centre nEt (ONCE) is a detector for incrementally learning to detect novel class objects with few examples.
ONCE fully respects the incremental learning paradigm, with novel class registration requiring only a single forward pass of few-shot training samples.
arXiv Detail & Related papers (2020-03-10T12:56:59Z)