DIODE: Dilatable Incremental Object Detection
- URL: http://arxiv.org/abs/2108.05627v1
- Date: Thu, 12 Aug 2021 09:45:57 GMT
- Title: DIODE: Dilatable Incremental Object Detection
- Authors: Can Peng, Kun Zhao, Sam Maksoud, Tianren Wang, Brian C. Lovell
- Abstract summary: Conventional deep learning models lack the capability of preserving previously learned knowledge.
We propose a dilatable incremental object detector (DIODE) for multi-step incremental detection tasks.
Our method achieves up to 6.4% performance improvement by increasing the number of parameters by just 1.2% for each newly learned task.
- Score: 15.59425584971872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To accommodate rapid changes in the real world, the cognition system of
humans is capable of continually learning concepts. On the contrary,
conventional deep learning models lack this capability of preserving previously
learned knowledge. When a neural network is fine-tuned to learn new tasks, its
performance on previously trained tasks will significantly deteriorate. Many
recent works on incremental object detection tackle this problem by introducing
advanced regularization. Although these methods have shown promising results,
the benefits are often short-lived after the first incremental step. Under
multi-step incremental learning, the trade-off between preserving old knowledge
and learning new tasks becomes progressively more severe. Thus, the performance
of regularization-based incremental object detectors gradually decays for
subsequent learning steps. In this paper, we aim to alleviate this performance
decay on multi-step incremental detection tasks by proposing a dilatable
incremental object detector (DIODE). For the task-shared parameters, our method
adaptively penalizes the changes of important weights for previous tasks. At
the same time, the structure of the model is dilated or expanded by a limited
number of task-specific parameters to promote new task learning. Extensive
experiments on PASCAL VOC and COCO datasets demonstrate substantial
improvements over the state-of-the-art methods. Notably, compared with the
state-of-the-art methods, our method achieves up to 6.4% performance
improvement by increasing the number of parameters by just 1.2% for each newly
learned task.
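To illustrate the two ingredients described in the abstract (an adaptive penalty on important task-shared weights and a small dilation of task-specific parameters), here is a minimal, hedged sketch in PyTorch. The class and function names, the quadratic form of the penalty, and the weighting factor lam are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an importance-weighted penalty on shared
# weights plus a few task-specific "dilation" parameters per incremental step.
# DilatableHead, importance_penalty, and lam are assumed names for illustration.
import torch
import torch.nn as nn


class DilatableHead(nn.Module):
    """Shared features feed a growing set of small task-specific branches."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.task_branches = nn.ModuleList()  # dilated by a small branch per task

    def add_task(self, num_new_classes: int) -> None:
        # New task-specific parameters; branches for previous tasks stay untouched.
        self.task_branches.append(nn.Linear(self.feat_dim, num_new_classes))

    def forward(self, feats: torch.Tensor) -> list[torch.Tensor]:
        return [branch(feats) for branch in self.task_branches]


def importance_penalty(model: nn.Module, old_params: dict, importance: dict,
                       lam: float = 1.0):
    """Adaptively penalize changes to shared weights that mattered for old tasks."""
    loss = 0.0
    for name, p in model.named_parameters():
        if name in old_params:  # task-shared parameters only
            loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss
```

In an incremental step, such a penalty would be added to the detection loss on the new classes, with the importance estimates recomputed after each task; DIODE's exact penalty and dilation structure are described in the paper itself.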
Related papers
- Reducing catastrophic forgetting of incremental learning in the absence of rehearsal memory with task-specific token [0.6144680854063939]
Deep learning models display catastrophic forgetting when learning new data continuously.
We present a novel method that preserves previous knowledge without storing previous data.
This method is inspired by the architecture of a vision transformer and employs a unique token capable of encapsulating the compressed knowledge of each task.
arXiv Detail & Related papers (2024-11-06T16:13:50Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Density Map Distillation for Incremental Object Counting [37.982124268097]
A na"ive approach to incremental object counting would suffer from catastrophic forgetting, where it would suffer from a dramatic performance drop on previous tasks.
We propose a new exemplar-free functional regularization method, called Density Map Distillation (DMD)
During training, we introduce a new counter head for each task and introduce a distillation loss to prevent forgetting of previous tasks.
arXiv Detail & Related papers (2023-04-11T14:46:21Z)
- Energy-based Latent Aligner for Incremental Learning [83.0135278697976]
Deep learning models tend to forget their earlier knowledge while incrementally learning new tasks.
This behavior emerges because the parameter updates optimized for the new tasks may not align well with the updates suitable for older tasks.
We propose ELI: Energy-based Latent Aligner for Incremental Learning.
arXiv Detail & Related papers (2022-03-28T17:57:25Z)
- Continual learning of quantum state classification with gradient episodic memory [0.20646127669654826]
A phenomenon called catastrophic forgetting emerges when a machine learning model is trained across multiple tasks.
Some continual learning strategies have been proposed to address the catastrophic forgetting problem.
In this work, we incorporate the gradient episodic memory method to train a variational quantum classifier (a generic sketch of the gradient-projection idea appears after this list).
arXiv Detail & Related papers (2022-03-26T09:28:26Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
RER consistently improves the performance of all baselines and surpasses current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Continual Learning via Bit-Level Information Preserving [88.32450740325005]
We study the continual learning process through the lens of information theory.
We propose Bit-Level Information Preserving (BLIP), which preserves the information gain on model parameters.
BLIP achieves close to zero forgetting while only requiring constant memory overheads throughout continual learning.
arXiv Detail & Related papers (2021-05-10T15:09:01Z)
- Class-incremental learning: survey and performance evaluation on image classification [38.27344435075399]
Incremental learning allows for efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data.
The main challenge for incremental learning is catastrophic forgetting, which refers to the precipitous drop in performance on previously learned tasks after learning a new one.
Recently, we have seen a shift towards class-incremental learning where the learner must discriminate at inference time between all classes seen in previous tasks without recourse to a task-ID.
arXiv Detail & Related papers (2020-10-28T23:28:15Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
Previously learned knowledge in deep neural networks can quickly fade when the network is trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
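As referenced in the gradient episodic memory entry above, the following is a generic, hedged sketch of the gradient-projection idea behind episodic-memory methods (closer to the averaged variant, A-GEM, than to the full GEM quadratic program). It is not tied to the variational quantum classifier in that paper; the function name and the flattened-gradient interface are assumptions for illustration.

```python
# Generic sketch of an episodic-memory gradient projection (A-GEM style).
# Purely illustrative; not the quantum-classifier implementation cited above.
import torch


def project_gradient(grad_new: torch.Tensor, grad_mem: torch.Tensor) -> torch.Tensor:
    """Project the new-task gradient so it does not increase loss on memory samples.

    grad_new: flattened gradient of the current-task loss (1-D tensor).
    grad_mem: flattened gradient of the loss on replayed memory examples (1-D tensor).
    """
    dot = torch.dot(grad_new, grad_mem)
    if dot < 0:  # the update would conflict with previously learned tasks
        grad_new = grad_new - (dot / torch.dot(grad_mem, grad_mem)) * grad_mem
    return grad_new
```

In a training loop, the flattened gradient of the current-task loss and the gradient computed on replayed memory samples would be passed through project_gradient before writing the result back to the parameters and calling the optimizer step.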