Incremental Few-Shot Object Detection via Simple Fine-Tuning Approach
- URL: http://arxiv.org/abs/2302.09779v1
- Date: Mon, 20 Feb 2023 05:48:46 GMT
- Title: Incremental Few-Shot Object Detection via Simple Fine-Tuning Approach
- Authors: Tae-Min Choi, Jong-Hwan Kim
- Abstract summary: iFSD incrementally learns novel classes using only a few examples without revisiting base classes.
We propose a simple fine-tuning-based approach, the Incremental Two-stage Fine-tuning Approach (iTFA) for iFSD.
iTFA achieves competitive performance on COCO and shows 30% higher AP than meta-learning methods on the LVIS dataset.
- Score: 6.808112517338073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore incremental few-shot object detection (iFSD), which
incrementally learns novel classes using only a few examples without revisiting
base classes. Previous iFSD works achieved the desired results by applying
meta-learning. However, meta-learning approaches yield insufficient performance,
making them difficult to apply to practical problems. In this light, we propose a
simple fine-tuning-based approach, the Incremental Two-stage Fine-tuning
Approach (iTFA) for iFSD, which contains three steps: 1) base training using
abundant base classes with the class-agnostic box regressor, 2) separation of
the RoI feature extractor and classifier into the base and novel class branches
for preserving base knowledge, and 3) fine-tuning the novel branch using only a
few novel class examples. We evaluate our iTFA on the real-world datasets
PASCAL VOC, COCO, and LVIS. iTFA achieves competitive performance on COCO and
shows 30% higher AP than meta-learning methods on the LVIS dataset.
Experimental results show the effectiveness and applicability of our proposed
method.
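To make the three-step recipe concrete, below is a minimal PyTorch-style sketch of the two-branch RoI head (step 2) and novel-branch-only fine-tuning (step 3). All module names, dimensions, and the single shared class-agnostic box regressor are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only; names and dimensions are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn

class TwoBranchRoIHead(nn.Module):
    def __init__(self, feat_dim: int, num_base: int, num_novel: int):
        super().__init__()
        # Step 2: separate RoI feature extractors and classifiers so that
        # novel-class updates cannot disturb base-class representations.
        self.base_fc = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.novel_fc = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.base_cls = nn.Linear(feat_dim, num_base + 1)    # +1 for background
        self.novel_cls = nn.Linear(feat_dim, num_novel + 1)
        # Step 1 trains a single class-agnostic box regressor (4 outputs).
        self.box_reg = nn.Linear(feat_dim, 4)

    def forward(self, roi_feats: torch.Tensor):
        base_scores = self.base_cls(self.base_fc(roi_feats))
        novel_scores = self.novel_cls(self.novel_fc(roi_feats))
        # Simplified: a real head would merge the two background logits.
        return torch.cat([base_scores, novel_scores], dim=-1), self.box_reg(roi_feats)

head = TwoBranchRoIHead(feat_dim=1024, num_base=60, num_novel=20)
# Step 3: freeze everything except the novel branch, then fine-tune
# on the few novel-class examples.
for name, param in head.named_parameters():
    param.requires_grad = "novel" in name
```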
Related papers
- UIFormer: A Unified Transformer-based Framework for Incremental Few-Shot Object Detection and Instance Segmentation [38.331860053615955]
This paper introduces a novel framework for unified incremental few-shot object detection (iFSOD) and instance segmentation (iFSIS) using the Transformer architecture.
Our goal is to create an optimal solution for situations where only a few examples of novel object classes are available.
arXiv Detail & Related papers (2024-11-13T12:29:44Z)
- SMILe: Leveraging Submodular Mutual Information For Robust Few-Shot Object Detection [2.0755366440393743]
Confusion and forgetting of object classes have been challenges of prime interest in Few-Shot Object Detection (FSOD).
We introduce a novel Submodular Mutual Information Learning framework which adopts mutual information functions.
Our proposed approach generalizes to several existing approaches in FSOD, agnostic of the backbone architecture.
arXiv Detail & Related papers (2024-07-02T20:53:43Z)
- DST-Det: Simple Dynamic Self-Training for Open-Vocabulary Object Detection [72.25697820290502]
This work introduces a straightforward and efficient strategy to identify potential novel classes through zero-shot classification.
We refer to this approach as the self-training strategy, which enhances recall and accuracy for novel classes without requiring extra annotations, datasets, or re-training.
Empirical evaluations on three datasets, LVIS, V3Det, and COCO, demonstrate significant improvements over baseline performance.
arXiv Detail & Related papers (2023-10-02T17:52:24Z)
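As a hedged illustration of the self-training idea, the sketch below mines pseudo-labels from proposals that a zero-shot classifier scores confidently as a novel class; `zero_shot_scores` and the threshold are hypothetical stand-ins, not the paper's implementation.

```python
# Illustrative self-training step for open-vocabulary detection: proposals
# that a zero-shot classifier scores confidently as a novel class become
# pseudo-labels for training. All names here are hypothetical.
from typing import List, Tuple
import torch

def mine_pseudo_labels(
    proposals: torch.Tensor,         # (N, 4) candidate boxes
    zero_shot_scores: torch.Tensor,  # (N, C_novel) similarity to novel-class text embeddings
    threshold: float = 0.8,
) -> List[Tuple[torch.Tensor, int]]:
    """Return (box, novel_class_id) pairs to add to the training targets."""
    conf, cls = zero_shot_scores.max(dim=-1)
    keep = conf > threshold
    return [(proposals[i], int(cls[i])) for i in torch.nonzero(keep).flatten()]
```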
arXiv Detail & Related papers (2023-10-02T17:52:24Z) - Strong Baselines for Parameter Efficient Few-Shot Fine-tuning [50.83426196335385]
Few-shot classification (FSC) entails learning novel classes given only a few examples per class after a pre-training (or meta-training) phase.
Recent works have shown that simply fine-tuning a pre-trained Vision Transformer (ViT) on new test classes is a strong approach for FSC.
Fine-tuning ViTs, however, is expensive in time, compute and storage.
This has motivated the design of parameter-efficient fine-tuning (PEFT) methods, which fine-tune only a fraction of the Transformer's parameters.
arXiv Detail & Related papers (2023-04-04T16:14:39Z)
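For a concrete sense of PEFT, here is a minimal bias-only (BitFit-style) recipe on a torchvision ViT; it is one generic PEFT variant chosen for brevity, not necessarily among the methods the paper benchmarks.

```python
# One generic PEFT recipe (bias-only / BitFit-style), shown for illustration.
from torchvision.models import vit_b_16

model = vit_b_16(weights="IMAGENET1K_V1")
for name, param in model.named_parameters():
    # Train only bias terms: roughly 0.1% of ViT-B/16's parameters.
    param.requires_grad = name.endswith("bias")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")
```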
- Fast Hierarchical Learning for Few-Shot Object Detection [57.024072600597464]
Transfer learning approaches have recently achieved promising results on the few-shot detection task.
These approaches suffer from the "catastrophic forgetting" issue caused by fine-tuning the base detector.
We tackle the aforementioned issues in this work.
arXiv Detail & Related papers (2022-10-10T20:31:19Z)
- Sylph: A Hypernetwork Framework for Incremental Few-shot Object Detection [8.492340530784697]
We show that finetune-free iFSD can be highly effective when a large number of base categories with abundant data are available for meta-training.
We benchmark our model on both COCO and LVIS, reporting as high as 17% AP on the long-tail rare classes on LVIS.
arXiv Detail & Related papers (2022-03-25T20:39:00Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impact of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
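As a rough sketch of what a prototype-centered contrastive objective can look like, the loss below attracts each class prototype to its own queries and repels it from the rest; the loss form, shapes, and temperature are assumptions for illustration, not the exact PAL formulation.

```python
# Hedged sketch of a prototype-centered contrastive objective; the exact
# PAL loss may differ. Shapes and the temperature are illustrative.
import torch
import torch.nn.functional as F

def prototype_centered_loss(support, support_y, query, query_y, tau=0.1):
    # support: (Ns, d), query: (Nq, d); labels are ints in [0, C).
    C = int(support_y.max()) + 1
    protos = torch.stack([support[support_y == c].mean(0) for c in range(C)])
    # Similarity of each prototype (row) to every query (column).
    logits = F.normalize(protos, dim=-1) @ F.normalize(query, dim=-1).T / tau
    # For each prototype, its positives are the queries of the same class.
    pos_mask = query_y.unsqueeze(0) == torch.arange(C).unsqueeze(1)
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p * pos_mask).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()
```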
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
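For intuition, here is a minimal sketch of the cosine nearest-centroid metric that Meta-Baseline evaluates with; the meta-learning stage that tunes the encoder is omitted, and all names are illustrative.

```python
# Minimal sketch of cosine nearest-centroid few-shot classification over
# embeddings from a pre-trained encoder; the meta-training loop is omitted.
import torch
import torch.nn.functional as F

def cosine_nearest_centroid(support, support_y, query, num_classes):
    # support: (Ns, d) embeddings, support_y: (Ns,) labels, query: (Nq, d).
    centroids = torch.stack(
        [support[support_y == c].mean(0) for c in range(num_classes)])
    sims = F.normalize(query, dim=-1) @ F.normalize(centroids, dim=-1).T
    return sims.argmax(dim=-1)  # predicted class per query
```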