Contrast R-CNN for Continual Learning in Object Detection
- URL: http://arxiv.org/abs/2108.04224v1
- Date: Sun, 11 Jul 2021 14:09:10 GMT
- Title: Contrast R-CNN for Continual Learning in Object Detection
- Authors: Kai Zheng, Cen Chen
- Abstract summary: We propose a new scheme for continual learning of object detection, namely Contrast R-CNN.
- Score: 13.79299067527118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The continual learning problem has been widely studied in image
classification, but little work has explored it in object detection. Some
recent works apply knowledge distillation to constrain the model to retain old
knowledge, but this rigid constraint is detrimental to learning new knowledge.
In our paper, we propose a new scheme for continual learning of object
detection, namely Contrast R-CNN, an approach that strikes a balance between
retaining old knowledge and learning new knowledge. Furthermore, we design a
Proposal Contrast to eliminate the ambiguity between old and new instances,
making continual learning more robust. Extensive evaluation on the PASCAL VOC
dataset demonstrates the effectiveness of our approach.
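The abstract names the Proposal Contrast component but does not spell out its formulation here. As a rough illustration, a proposal-level contrastive objective could take the form of a supervised InfoNCE loss over RoI embeddings; the sketch below is a minimal PyTorch assumption, and every name in it (proposal_contrast_loss, temperature) is illustrative rather than taken from the paper.

```python
# Hedged sketch: a supervised contrastive (InfoNCE-style) loss over RoI
# proposal embeddings, one plausible reading of "Proposal Contrast".
import torch
import torch.nn.functional as F

def proposal_contrast_loss(proposal_embs: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Pull same-class proposals together and push different classes apart,
    reducing the ambiguity between old-class and new-class instances.

    proposal_embs: (N, D) RoI feature embeddings.
    labels:        (N,) class ids spanning both old and new classes.
    """
    z = F.normalize(proposal_embs, dim=1)            # unit-norm embeddings
    logits = z @ z.t() / temperature                 # (N, N) pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # Normalize each row over all non-self proposals, then average the
    # log-probabilities of each anchor's positive pairs.
    log_prob = logits - torch.logsumexp(
        logits.masked_fill(self_mask, float('-inf')), dim=1, keepdim=True)
    per_anchor = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor[pos_mask.any(1)].mean()        # skip anchors with no positive
```

In a Faster R-CNN pipeline such a term would be added to the usual classification and regression losses; how proposals are sampled and weighted across old and new classes is exactly the detail the abstract leaves open.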
Related papers
- Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z)
- SRIL: Selective Regularization for Class-Incremental Learning [5.810252620242912]
Class-Incremental Learning aims to create an integrated model that balances plasticity and stability to overcome catastrophic forgetting.
We propose a selective regularization method that accepts new knowledge while maintaining previous knowledge.
We validate the effectiveness of the proposed method through extensive experimental protocols using CIFAR-100, ImageNet-Subset, and ImageNet-Full.
arXiv Detail & Related papers (2023-05-09T05:04:35Z)
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity); a sketch of the distillation term such methods adapt appears after this list.
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better 'stability-plasticity' trade-off.
RER can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Multi-View Correlation Distillation for Incremental Object Detection [12.536640582318949]
We propose a novel Multi-View Correlation Distillation (MVCD) based incremental object detection method.
arXiv Detail & Related papers (2021-07-05T04:36:33Z)
- Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network [0.20305676256390928]
Continual learning is a major problem in the deep learning community.
In this paper, we propose a novel continual learning method called Split-and-Bridge.
arXiv Detail & Related papers (2021-07-03T05:51:53Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that transfer better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Multiband VAE: Latent Space Partitioning for Knowledge Consolidation in Continual Learning [14.226973149346883]
Acquiring knowledge about new data samples without forgetting previous ones is a critical problem of continual learning.
We propose a new method for unsupervised continual knowledge consolidation in generative models that relies on partitioning a Variational Autoencoder's latent space.
On top of the standard continual learning evaluation benchmarks, we evaluate our method on a new knowledge consolidation scenario and show that the proposed approach outperforms the state of the art by up to twofold.
arXiv Detail & Related papers (2021-06-23T06:58:40Z)
- Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors [63.21036904487014]
Continual learning of new knowledge over time is one desirable capability for intelligent systems to recognize more and more classes of objects.
We propose a simple yet effective fusion mechanism that incorporates all the previously learned feature extractors into the model.
Experiments on multiple classification tasks show that the proposed approach can effectively reduce the forgetting of old knowledge, achieving state-of-the-art continual learning performance.
arXiv Detail & Related papers (2021-04-28T07:49:24Z)
- SID: Incremental Learning for Anchor-Free Object Detection via Selective and Inter-Related Distillation [16.281712605385316]
Incremental learning requires a model to continually learn new tasks from streaming data.
Traditional fine-tuning of a well-trained deep neural network on a new task will dramatically degrade performance on the old task.
We propose a novel incremental learning paradigm called Selective and Inter-related Distillation (SID).
arXiv Detail & Related papers (2020-12-31T04:12:06Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
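A recurring ingredient across these entries, and the baseline the abstract argues against, is a distillation term that anchors the new detector to a frozen copy of the old one; the adaptive methods above differ mainly in how they weight it. Below is a minimal sketch of that rigid constraint, with all names (old_backbone, kd_weight) purely illustrative.

```python
# Hedged sketch of the rigid knowledge-distillation constraint: a frozen
# teacher (the old-class detector) supervises the student's features with a
# fixed weight. Adaptive variants replace `kd_weight` with a learned or
# uncertainty-based coefficient.
import torch
import torch.nn.functional as F

def distillation_loss(student_feats: torch.Tensor,
                      teacher_feats: torch.Tensor,
                      kd_weight: float = 1.0) -> torch.Tensor:
    # L2 penalty keeping the new model's features close to the old model's
    # (stability), at the cost of flexibility on new classes (plasticity).
    return kd_weight * F.mse_loss(student_feats, teacher_feats)

# Typical use during incremental training (hypothetical names):
#   with torch.no_grad():
#       teacher_feats = old_backbone(images)    # frozen old-class detector
#   student_feats = new_backbone(images)
#   loss = detection_loss + distillation_loss(student_feats, teacher_feats)
```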
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.