Label, Verify, Correct: A Simple Few Shot Object Detection Method
- URL: http://arxiv.org/abs/2112.05749v1
- Date: Fri, 10 Dec 2021 18:59:06 GMT
- Title: Label, Verify, Correct: A Simple Few Shot Object Detection Method
- Authors: Prannay Kaul, Weidi Xie, Andrew Zisserman
- Abstract summary: We introduce a simple pseudo-labelling method to source high-quality pseudo-annotations from a training set.
We present two novel methods to improve the precision of the pseudo-labelling process.
Our method achieves state-of-the-art or second-best performance compared to existing approaches.
- Score: 93.84801062680786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The objective of this paper is few-shot object detection (FSOD) -- the task
of expanding an object detector for a new category given only a few instances
for training. We introduce a simple pseudo-labelling method to source
high-quality pseudo-annotations from the training set, for each new category,
vastly increasing the number of training instances and reducing class
imbalance; our method finds previously unlabelled instances. Naïvely training
with model predictions yields sub-optimal performance; we present two novel
methods to improve the precision of the pseudo-labelling process: first, we
introduce a verification technique to remove candidate detections with
incorrect class labels; second, we train a specialised model to correct poor
quality bounding boxes. After these two novel steps, we obtain a large set of
high-quality pseudo-annotations that allow our final detector to be trained
end-to-end. Additionally, we demonstrate our method maintains base class
performance, and the utility of simple augmentations in FSOD. When
benchmarked on PASCAL VOC and MS-COCO, our method achieves state-of-the-art or
second-best performance compared to existing approaches across all numbers of
shots.
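The three-stage pipeline described in the abstract can be sketched as a simple filter-and-refine loop. This is a minimal illustration, not the authors' implementation: the `Detection` record, the `verify` callback, the `correct_box` callback, and the score threshold are all assumed interfaces standing in for the paper's verification model and specialised box-correction model.

```python
# Hypothetical sketch of the label -> verify -> correct pseudo-labelling loop.
# `verify` and `correct_box` stand in for the paper's learned models; they are
# assumptions for illustration, not the authors' actual code.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

@dataclass
class Detection:
    image_id: int
    box: Box
    label: str
    score: float

def pseudo_label(candidates: List[Detection],
                 verify: Callable[[Detection], bool],
                 correct_box: Callable[[Box], Box],
                 score_thresh: float = 0.8) -> List[Detection]:
    """Turn raw detector outputs into high-quality pseudo-annotations."""
    pseudo = []
    for det in candidates:
        if det.score < score_thresh:
            continue                    # label: keep only confident candidates
        if not verify(det):
            continue                    # verify: drop incorrect class labels
        refined = correct_box(det.box)  # correct: refine poor-quality boxes
        pseudo.append(Detection(det.image_id, refined, det.label, det.score))
    return pseudo
```

The surviving pseudo-annotations can then be merged with the few ground-truth shots to train the final detector end-to-end, which is the step that reduces class imbalance for the novel categories.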
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework reports significant performance compared with existing USOD methods.
arXiv Detail & Related papers (2021-12-07T11:54:06Z)
- Plug-and-Play Few-shot Object Detection with Meta Strategy and Explicit Localization Inference [78.41932738265345]
This paper proposes a plug detector that can accurately detect the objects of novel categories without fine-tuning process.
We introduce two explicit inferences into the localization process to reduce its dependence on annotated data.
It shows a significant lead in efficiency, precision, and recall under varied evaluation protocols.
arXiv Detail & Related papers (2021-10-26T03:09:57Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Frustratingly Simple Few-Shot Object Detection [98.42824677627581]
We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task.
Such a simple approach outperforms the meta-learning methods by roughly 2-20 points on current benchmarks.
arXiv Detail & Related papers (2020-03-16T00:29:14Z)
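The two-stage recipe in the entry above (train the full detector on base classes, then fine-tune only the final layer on the few novel-class shots) can be sketched as follows. This is a toy illustration under assumed names: `Layer`, `Detector`, and `freeze_all_but_last` are hypothetical stand-ins, not the paper's actual code.

```python
# Hypothetical sketch of last-layer-only fine-tuning for few-shot detection.
# In a real framework this would be done by setting requires_grad on the
# detector's parameters; here a toy Layer/Detector pair shows the idea.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Layer:
    name: str
    trainable: bool = True  # stage one trains everything

@dataclass
class Detector:
    layers: List[Layer] = field(default_factory=list)

    def freeze_all_but_last(self) -> None:
        """Stage two: freeze everything except the final classifier layer."""
        for layer in self.layers[:-1]:
            layer.trainable = False
        self.layers[-1].trainable = True

det = Detector([Layer("backbone"), Layer("rpn"),
                Layer("roi_head"), Layer("box_classifier")])
det.freeze_all_but_last()  # only "box_classifier" remains trainable
```

Keeping the feature extractor frozen is what preserves base-class performance while the small novel-class set updates only the classifier head.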
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.