Class-Incremental Few-Shot Object Detection
- URL: http://arxiv.org/abs/2105.07637v1
- Date: Mon, 17 May 2021 06:49:29 GMT
- Title: Class-Incremental Few-Shot Object Detection
- Authors: Pengyang Li, Yanan Li and Donghui Wang
- Abstract summary: This paper focuses on a more challenging but realistic class-incremental few-shot object detection problem (iFSD).
It aims to incrementally transfer the model for novel objects from only a few annotated samples without catastrophically forgetting the previously learned ones.
We propose a novel method LEAST, which can transfer with Less forgetting, fEwer training resources, And Stronger Transfer capability.
- Score: 8.569278547520438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional detection networks usually need abundant labeled training
samples, while humans can learn new concepts incrementally with just a few
examples. This paper focuses on a more challenging but realistic
class-incremental few-shot object detection problem (iFSD). It aims to
incrementally transfer the model for novel objects from only a few annotated
samples without catastrophically forgetting the previously learned ones. To
tackle this problem, we propose a novel method LEAST, which can transfer with
Less forgetting, fEwer training resources, And Stronger Transfer capability.
Specifically, we first present the transfer strategy to reduce unnecessary
weight adaptation and improve the transfer capability for iFSD. On this basis,
we then integrate the knowledge distillation technique using a less
resource-consuming approach to alleviate forgetting and propose a novel
clustering-based exemplar selection process to preserve more discriminative
features previously learned. Being a generic and effective method, LEAST can
largely improve the iFSD performance on various benchmarks.
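The clustering-based exemplar selection described above can be illustrated with a minimal sketch: cluster the previously learned feature vectors and keep, for each cluster, the real sample closest to its centroid, so the retained exemplars cover the feature space's discriminative modes. This is an assumption-based illustration using plain k-means (the function name and parameters are hypothetical, not the paper's actual procedure):

```python
import numpy as np

def select_exemplars(features, k, n_iters=20, seed=0):
    """Pick up to k exemplar indices: run simple k-means over the feature
    vectors, then keep the real sample nearest each centroid.
    Illustrative only; not the paper's exact selection process."""
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialise centroids from k distinct random samples.
    centroids = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(n_iters):
        # Assign each feature vector to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each centroid; keep the old one if its cluster emptied.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # Exemplar = the real sample nearest each centroid (duplicates collapse,
    # so fewer than k indices may be returned).
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    return np.unique(dists.argmin(axis=0))
```

For instance, with features forming two well-separated groups, the selection returns one representative index from each group, which is the behaviour an exemplar memory for incremental learning relies on.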
Related papers
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no additional computational cost.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z) - Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning [3.1153758106426603]
Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples.
We propose a novel transfer-based method which aims at processing the feature vectors so that they become closer to Gaussian-like distributions.
In the case of transductive few-shot learning, where unlabelled test samples are available during training, we also introduce an optimal-transport-inspired algorithm to further boost performance.
arXiv Detail & Related papers (2021-10-18T16:29:17Z) - Towards Generalized and Incremental Few-Shot Object Detection [9.033533653482529]
A novel Incremental Few-Shot Object Detection (iFSOD) method is proposed to enable effective continual learning from few-shot samples.
Specifically, a Double-Branch Framework (DBF) is proposed to decouple the feature representations of the base and novel (few-shot) classes.
We conduct experiments on both Pascal VOC and MS-COCO, which demonstrate that our method can effectively solve the problem of incremental few-shot detection.
arXiv Detail & Related papers (2021-09-23T12:38:09Z) - Few-shot Weakly-Supervised Object Detection via Directional Statistics [55.97230224399744]
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD).
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z) - Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer while fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models learned without supervision to another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant pretrained models.
arXiv Detail & Related papers (2020-07-10T06:02:05Z) - Sample-based Regularization: A Transfer Learning Strategy Toward Better Generalization [8.432864879027724]
Training a deep neural network with a small amount of data is a challenging problem.
One of the practical difficulties that we often face is to collect many samples.
By using the source model trained with a large-scale dataset, the target model can alleviate the overfitting originated from the lack of training data.
arXiv Detail & Related papers (2020-07-10T06:02:05Z) - Generalized Zero and Few-Shot Transfer for Facial Forgery Detection [3.8073142980733]
We propose a new transfer learning approach to address the problem of zero and few-shot transfer in the context of forgery detection.
We find this learning strategy to be surprisingly effective at domain transfer compared to a traditional classification or even state-of-the-art domain adaptation/few-shot learning methods.
arXiv Detail & Related papers (2020-06-21T18:10:52Z) - Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.