SID: Incremental Learning for Anchor-Free Object Detection via Selective
and Inter-Related Distillation
- URL: http://arxiv.org/abs/2012.15439v1
- Date: Thu, 31 Dec 2020 04:12:06 GMT
- Title: SID: Incremental Learning for Anchor-Free Object Detection via Selective
and Inter-Related Distillation
- Authors: Can Peng, Kun Zhao, Sam Maksoud, Meng Li, Brian C. Lovell
- Abstract summary: Incremental learning requires a model to continually learn new tasks from streaming data.
Traditional fine-tuning of a well-trained deep neural network on a new task will dramatically degrade performance on the old task.
We propose a novel incremental learning paradigm called Selective and Inter-related Distillation (SID).
- Score: 16.281712605385316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental learning requires a model to continually learn new tasks from
streaming data. However, traditional fine-tuning of a well-trained deep neural
network on a new task will dramatically degrade performance on the old task --
a problem known as catastrophic forgetting. In this paper, we address this
issue in the context of anchor-free object detection, which is a new trend in
computer vision as it is simple, fast, and flexible. Simply adapting current
incremental learning strategies fails on these anchor-free detectors due to
lack of consideration of their specific model structures. To deal with the
challenges of incremental learning on anchor-free object detectors, we propose
a novel incremental learning paradigm called Selective and Inter-related
Distillation (SID). In addition, a novel evaluation metric is proposed to
better assess the performance of detectors under incremental learning
conditions. By selectively distilling at the proper locations and further
transferring additional instance relation knowledge, our method demonstrates
significant advantages on the benchmark datasets PASCAL VOC and COCO.
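To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch of (i) distillation restricted to selected feature locations and (ii) transfer of instance-relation knowledge. The selection mask, the layer choice, and the use of pairwise distances as the "relation" are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def selective_distillation(f_student, f_teacher, mask):
    """L2 distillation applied only at selected spatial locations.

    f_student, f_teacher: (N, C, H, W) feature maps from matching layers.
    mask: (N, 1, H, W) binary mask of locations to distill (the paper's
    selection criterion is not reproduced here; this mask is assumed).
    """
    diff = (f_student - f_teacher.detach()) ** 2 * mask
    return diff.sum() / mask.sum().clamp(min=1.0)

def relation_distillation(e_student, e_teacher):
    """Match pairwise instance relations instead of raw features.

    e_student, e_teacher: (K, D) embeddings of K detected instances.
    Pairwise Euclidean distance is one illustrative notion of relation.
    """
    rel_s = torch.cdist(e_student, e_student)
    rel_t = torch.cdist(e_teacher, e_teacher).detach()
    return F.smooth_l1_loss(rel_s, rel_t)

# Toy usage with random tensors standing in for real network outputs.
fs, ft = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
mask = (torch.rand(2, 1, 32, 32) > 0.5).float()
es, et = torch.randn(5, 128), torch.randn(5, 128)
loss = selective_distillation(fs, ft, mask) + relation_distillation(es, et)
```

Detaching the teacher tensors is the standard distillation setup: gradients flow only into the student.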
Related papers
- Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
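BAdam's actual update rule is not given in this summary; the sketch below only illustrates the general prior-based recipe such methods refine: a quadratic penalty pulling parameters toward their previous-task values, weighted by a per-parameter importance estimate.

```python
import torch

def prior_penalty(model, old_params, importance, strength=1.0):
    """Generic prior-based regularizer (EWC-style quadratic penalty).

    old_params / importance: dicts of tensors keyed by parameter name,
    snapshotted after the previous task. BAdam's concrete method
    differs; this only illustrates the family it belongs to.
    """
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return strength * loss
```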
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
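The summary does not detail the imitation objective; a plausible minimal form, assuming proposal features are pulled toward matched high-quality exemplar features by cosine similarity, is:

```python
import torch
import torch.nn.functional as F

def feature_imitation_loss(proposal_feats, exemplar_feats, assignment):
    """Pull each proposal's feature toward a matched exemplar feature.

    proposal_feats: (P, D) features of (e.g. small-object) proposals.
    exemplar_feats: (E, D) bank of high-quality reference features.
    assignment: (P,) long tensor mapping proposals to exemplars. The
    matching rule and the cosine form are illustrative assumptions.
    """
    targets = exemplar_feats[assignment].detach()
    return 1.0 - F.cosine_similarity(proposal_feats, targets, dim=1).mean()
```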
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream to continuously acquire new knowledge as well as retain the learnt one.
The main challenge is the "catastrophic forgetting" issue: the inability to retain previously learnt knowledge while learning new tasks.
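The paper's specific semantically distinct augmentations are not described above; the contrastive backbone such a method builds on is the standard NT-Xent objective over two augmented views, sketched here:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """Standard NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, D) embeddings of two augmentations of the same batch.
    Each sample's positive is its other view; all remaining samples
    act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                 # (2N, 2N) similarities
    sim.fill_diagonal_(float('-inf'))             # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```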
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose Incremental-DETR, which performs incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
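A minimal sketch of knowledge distillation on classification outputs, using the common temperature-scaled KL form (the temperature and the restriction to old-class logits are assumptions, not Incremental-DETR's exact loss):

```python
import torch
import torch.nn.functional as F

def distill_class_logits(student_logits, teacher_logits, T=2.0):
    """KL distillation on the old model's class predictions.

    student_logits, teacher_logits: (N, num_old_classes) scores for the
    same proposals/queries from the new and old detectors.
    """
    p_t = F.softmax(teacher_logits.detach() / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction='batchmean') * T * T
```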
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
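CCLF's exact curiosity estimate is not given here; one common instantiation scores transitions by forward-model prediction error and samples replay data proportionally, sketched below:

```python
import torch

def curiosity_scores(forward_model, obs, actions, next_obs):
    """Curiosity as forward-model prediction error (a common choice;
    CCLF's actual curiosity measure may differ)."""
    with torch.no_grad():
        pred = forward_model(obs, actions)
    return ((pred - next_obs) ** 2).flatten(1).mean(dim=1)

def sample_by_curiosity(scores, batch_size):
    """Draw replay indices with probability proportional to curiosity."""
    probs = scores / scores.sum()
    return torch.multinomial(probs, batch_size, replacement=True)
```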
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- Towards Generalized and Incremental Few-Shot Object Detection [9.033533653482529]
A novel Incremental Few-Shot Object Detection (iFSOD) method is proposed to enable effective continual learning from few-shot samples.
Specifically, a Double-Branch Framework (DBF) is proposed to decouple the feature representations of the base and novel (few-shot) classes.
We conduct experiments on both Pascal VOC and MS-COCO, which demonstrate that our method can effectively solve the problem of incremental few-shot detection.
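A minimal sketch of the decoupling idea, assuming a frozen base-class head alongside a trainable novel-class head over shared features (the freezing scheme and layer sizes are illustrative guesses, not the DBF's exact design):

```python
import torch
import torch.nn as nn

class DoubleBranchHead(nn.Module):
    """Two classification branches over a shared feature extractor:
    a frozen branch for base classes and a trainable branch for novel
    few-shot classes."""

    def __init__(self, feat_dim, num_base, num_novel):
        super().__init__()
        self.base_head = nn.Linear(feat_dim, num_base)
        self.novel_head = nn.Linear(feat_dim, num_novel)
        for p in self.base_head.parameters():   # keep base knowledge fixed
            p.requires_grad = False

    def forward(self, feats):                   # feats: (N, feat_dim)
        return torch.cat([self.base_head(feats),
                          self.novel_head(feats)], dim=1)
```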
arXiv Detail & Related papers (2021-09-23T12:38:09Z)
- Multi-View Correlation Distillation for Incremental Object Detection [12.536640582318949]
We propose a novel Multi-View Correlation Distillation (MVCD) based incremental object detection method.
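Which correlations MVCD distills across its multiple views is not detailed above; a representative single-view form matches channel-wise feature correlations between the old (teacher) and new (student) detectors:

```python
import torch
import torch.nn.functional as F

def correlation_distillation(f_student, f_teacher):
    """Distill channel-wise feature correlations rather than raw values.

    f_student, f_teacher: (N, C, H, W) feature maps. The channel view
    is one representative choice, assumed here for illustration.
    """
    def channel_corr(f):
        v = f.flatten(2)                 # (N, C, H*W)
        v = F.normalize(v, dim=2)        # unit-norm per channel
        return v @ v.transpose(1, 2)     # (N, C, C) correlation matrix
    return F.mse_loss(channel_corr(f_student),
                      channel_corr(f_teacher).detach())
```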
arXiv Detail & Related papers (2021-07-05T04:36:33Z)
- Data-efficient Weakly-supervised Learning for On-line Object Detection under Domain Shift in Robotics [24.878465999976594]
Several object detection methods have been proposed in the literature, the vast majority based on Deep Convolutional Neural Networks (DCNNs).
These methods have important limitations for robotics: learning solely on off-line data may introduce biases and prevent adaptation to novel tasks.
In this work, we investigate how weakly-supervised learning can cope with these problems.
arXiv Detail & Related papers (2020-12-28T16:36:11Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
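The learned reshaping itself is not specified in this summary; as a stand-in, the sketch below gates each parameter's gradient elementwise with meta-learned values before the optimizer step:

```python
import torch

def reshape_gradients(model, gates):
    """Elementwise-gate each parameter's gradient before the update.

    gates: dict of tensors (same shapes as the parameters) that would
    themselves be meta-learned in an outer loop; this gating form is an
    illustrative stand-in for the paper's learned gradient reshaping.
    """
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(torch.sigmoid(gates[name]))
```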
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.