Faster ILOD: Incremental Learning for Object Detectors based on Faster RCNN
- URL: http://arxiv.org/abs/2003.03901v2
- Date: Wed, 7 Oct 2020 00:42:06 GMT
- Title: Faster ILOD: Incremental Learning for Object Detectors based on Faster RCNN
- Authors: Can Peng, Kun Zhao and Brian C. Lovell
- Abstract summary: We design an efficient end-to-end incremental object detector using knowledge distillation.
Experiments on the benchmark datasets, PASCAL VOC and COCO, demonstrate that the proposed incremental detector based on Faster RCNN is more accurate as well as being 13 times faster than the baseline detector.
- Score: 10.601349348583852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human vision and perception system is inherently incremental where new
knowledge is continually learned over time whilst existing knowledge is
retained. On the other hand, deep learning networks are ill-equipped for
incremental learning. When a well-trained network is adapted to new categories,
its performance on the old categories will dramatically degrade. To address
this problem, incremental learning methods have been explored which preserve
the old knowledge of deep learning models. However, the state-of-the-art
incremental object detector employs an external fixed region proposal method
that increases overall computation time and reduces accuracy compared to
Region Proposal Network (RPN) based object detectors such as Faster RCNN. The
purpose of this paper is to design an efficient end-to-end incremental object
detector using knowledge distillation. We first evaluate and analyze the
performance of the RPN-based detector with classic distillation on incremental
detection tasks. Then, we introduce multi-network adaptive distillation that
properly retains knowledge from the old categories when fine-tuning the model
for the new task. Experiments on the benchmark datasets, PASCAL VOC and COCO,
demonstrate that the proposed incremental detector based on Faster RCNN is more
accurate as well as being 13 times faster than the baseline detector.
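The distillation idea described above can be made concrete with a short sketch. Below is a minimal PyTorch illustration of knowledge distillation for incremental detection: a frozen copy of the old detector (the teacher) supervises the updated detector (the student) on the old categories while the student learns the new ones. The tensor shapes, temperature, and `lambda_distill` weight are illustrative assumptions, not the paper's exact multi-network adaptive formulation.

```python
# Illustrative sketch of distillation for incremental object detection.
# NOT the exact Faster ILOD method; shapes and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_boxes, teacher_boxes, temperature=2.0):
    """Distill old-class knowledge from a frozen teacher into the student.

    student_logits / teacher_logits: (N, num_old_classes) classification scores
    student_boxes / teacher_boxes:   (N, num_old_classes * 4) box regressions
    """
    # Soften both class distributions and match them with KL divergence.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_soft_student = F.log_softmax(student_logits / t, dim=1)
    cls_distill = F.kl_div(log_soft_student, soft_teacher,
                           reduction="batchmean") * (t * t)

    # Keep old-class box predictions close to the teacher's.
    box_distill = F.smooth_l1_loss(student_boxes, teacher_boxes)
    return cls_distill + box_distill

def total_loss(new_task_loss, distill, lambda_distill=1.0):
    # Standard detection loss on the new classes plus a weighted
    # distillation term that preserves the old classes.
    return new_task_loss + lambda_distill * distill

if __name__ == "__main__":
    # Dummy tensors standing in for RoI-head outputs on 8 proposals
    # over 20 old classes.
    s_logits, t_logits = torch.randn(8, 20), torch.randn(8, 20)
    s_boxes, t_boxes = torch.randn(8, 80), torch.randn(8, 80)
    d = distillation_loss(s_logits, t_logits, s_boxes, t_boxes)
    print(total_loss(new_task_loss=torch.tensor(1.5), distill=d))
```

In an RPN-based detector such as Faster RCNN, analogous distillation terms can also be applied to intermediate features and RPN outputs; the sketch above shows only the RoI-head case for brevity.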
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a time window.
It requires only eleven features that do not rely on deep packet inspection, can be found in most NIDS datasets, and are easily obtained from conventional flow collectors.
The reported training accuracy exceeds 99% for the proposed method with as few as twenty neural network input features.
arXiv Detail & Related papers (2024-10-24T11:36:19Z)
- Deep Learning Algorithms Used in Intrusion Detection Systems -- A Review [0.0]
This review paper studies recent advancements in the application of deep learning techniques, including CNN, Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Deep Neural Networks (DNN), Long Short-Term Memory (LSTM), autoencoders (AE), Multi-Layer Perceptrons (MLP), Self-Normalizing Networks (SNN) and hybrid models, within network intrusion detection systems.
arXiv Detail & Related papers (2024-02-26T20:57:35Z)
- Dense Network Expansion for Class Incremental Learning [61.00081795200547]
State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task.
A new NE method, dense network expansion (DNE), is proposed to achieve a better trade-off between accuracy and model complexity.
It outperforms the previous SOTA methods by a margin of 4% in terms of accuracy, with similar or even smaller model scale.
arXiv Detail & Related papers (2023-03-22T16:42:26Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose Incremental-DETR, which performs incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting on the limited novel-class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- Energy-based Latent Aligner for Incremental Learning [83.0135278697976]
Deep learning models tend to forget their earlier knowledge while incrementally learning new tasks.
This behavior emerges because the parameter updates optimized for the new tasks may not align well with the updates suitable for older tasks.
We propose ELI: Energy-based Latent Aligner for Incremental Learning.
arXiv Detail & Related papers (2022-03-28T17:57:25Z)
- SID: Incremental Learning for Anchor-Free Object Detection via Selective and Inter-Related Distillation [16.281712605385316]
Incremental learning requires a model to continually learn new tasks from streaming data.
Traditional fine-tuning of a well-trained deep neural network on a new task will dramatically degrade performance on the old task.
We propose a novel incremental learning paradigm called Selective and Inter-related Distillation (SID).
arXiv Detail & Related papers (2020-12-31T04:12:06Z)
- Lifelong Object Detection [28.608982224098565]
We leverage the fact that new training classes arrive in a sequential manner and incrementally refine the model.
We consider the representative object detector, Faster R-CNN, for both accurate and efficient prediction.
arXiv Detail & Related papers (2020-09-02T15:08:51Z)
- Two-Level Residual Distillation based Triple Network for Incremental Object Detection [21.725878050355824]
We propose a novel incremental object detector based on Faster R-CNN to continuously learn from new object classes without using old data.
It is a triple network in which an old model and a residual model act as assistants, helping the incremental model learn new classes without forgetting previously learned knowledge.
arXiv Detail & Related papers (2020-07-27T11:04:57Z)
- RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr [60.07531696857743]
Fine-tuning a deep convolutional neural network (CNN) using a pre-trained model helps transfer knowledge learned from larger datasets to the target task.
We propose RIFLE - a strategy that deepens backpropagation in transfer learning settings.
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning; see the sketch after this list.
arXiv Detail & Related papers (2020-07-07T11:27:43Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
- Depthwise Non-local Module for Fast Salient Object Detection Using a Single Thread [136.2224792151324]
We propose a new deep learning algorithm for fast salient object detection.
The proposed algorithm achieves competitive accuracy and high inference efficiency simultaneously with a single CPU thread.
arXiv Detail & Related papers (2020-01-22T15:23:48Z)
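As a companion to the RIFLE entry above, here is a minimal sketch of its core idea: periodically re-initializing the final fully-connected layer while fine-tuning a pre-trained CNN. The toy backbone, learning rate, and reset period of three epochs are illustrative assumptions, not the paper's exact training recipe.

```python
# Minimal sketch of RIFLE-style fine-tuning: the task head is periodically
# re-initialized so that larger gradients flow back into the deep layers.
# Backbone, schedule, and reset period are illustrative assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(  # stand-in for a pre-trained feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
fc = nn.Linear(16, 10)  # task head that will be re-initialized
model = nn.Sequential(backbone, fc)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

num_epochs, reset_period = 9, 3  # re-initialize the head every 3 epochs
for epoch in range(num_epochs):
    if epoch > 0 and epoch % reset_period == 0:
        # Resetting the FC layer forces the network to re-learn the mapping
        # from features to labels, driving meaningful updates in deep layers.
        fc.reset_parameters()
    # Dummy batch standing in for the target-task training data.
    x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```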