Augmented Box Replay: Overcoming Foreground Shift for Incremental Object
Detection
- URL: http://arxiv.org/abs/2307.12427v1
- Date: Sun, 23 Jul 2023 20:47:03 GMT
- Title: Augmented Box Replay: Overcoming Foreground Shift for Incremental Object
Detection
- Authors: Yuyang Liu, Yang Cong, Dipam Goswami, Xialei Liu, Joost van de Weijer
- Abstract summary: In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting.
Unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD).
Foreground shift occurs only when replaying images of previous tasks and refers to the fact that their background may contain foreground objects of the current task.
- Score: 26.948748060138264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In incremental learning, replaying stored samples from previous tasks
together with current task samples is one of the most efficient approaches to
address catastrophic forgetting. However, unlike incremental classification,
image replay has not been successfully applied to incremental object detection
(IOD). In this paper, we identify the overlooked problem of foreground shift as
the main reason for this. Foreground shift only occurs when replaying images of
previous tasks and refers to the fact that their background might contain
foreground objects of the current task. To overcome this problem, a novel and
efficient Augmented Box Replay (ABR) method is developed that only stores and
replays foreground objects and thereby circumvents the foreground shift
problem. In addition, we propose an innovative Attentive RoI Distillation loss
that uses spatial attention from region-of-interest (RoI) features to constrain
the current model to focus on the most important information from the old model. ABR
significantly reduces forgetting of previous classes while maintaining high
plasticity in current classes. Moreover, it considerably reduces the storage
requirements when compared to standard image replay. Comprehensive experiments
on Pascal-VOC and COCO datasets support the state-of-the-art performance of our
model.
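The core box-replay idea described in the abstract (store only the foreground boxes of old classes, then paste them into current-task images, so replayed backgrounds can never contain unlabeled current-task objects) can be sketched roughly as follows. The function name, data layout, and the caller-chosen top-left placement are illustrative assumptions; the paper's actual mixup/mosaic paste strategies are not reproduced here.

```python
import numpy as np

def replay_boxes(current_img, stored_crops, positions):
    """Paste stored foreground crops of old classes into a current-task
    image and return the augmented image plus the pasted boxes.
    Placement here is naive (caller-chosen top-left corners); the real
    ABR method uses more elaborate mixup/mosaic pasting."""
    img = current_img.copy()
    boxes = []
    for (crop, label), (y, x) in zip(stored_crops, positions):
        h, w = crop.shape[:2]
        img[y:y + h, x:x + w] = crop               # overwrite background pixels
        boxes.append((x, y, x + w, y + h, label))  # (x1, y1, x2, y2, class)
    return img, boxes

# usage: paste one 2x2 old-class crop into a 4x4 current-task image
bg = np.zeros((4, 4, 3), dtype=np.uint8)
crop = np.full((2, 2, 3), 255, dtype=np.uint8)
img, boxes = replay_boxes(bg, [(crop, "dog")], [(1, 1)])
```

Because only the cropped boxes are stored rather than whole images, this also illustrates why the storage footprint is much smaller than for standard image replay.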
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing raw data is often impractical given memory constraints or data-privacy issues.
As a replacement, data-free replay methods are proposed that synthesize samples by inverting them from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - FDCNet: Feature Drift Compensation Network for Class-Incremental Weakly
Supervised Object Localization [10.08410402383604]
This work addresses the task of class-incremental weakly supervised object localization (CI-WSOL).
The goal is to incrementally learn object localization for novel classes using only image-level annotations while retaining the ability to localize previously learned classes.
We first present a strong baseline method for CI-WSOL by adapting the strategies that class-incremental classifiers use to mitigate catastrophic forgetting.
We then propose the feature drift compensation network to compensate for the effects of feature drifts on class scores and localization maps.
arXiv Detail & Related papers (2023-09-17T01:10:45Z) - Rethinking the Localization in Weakly Supervised Object Localization [51.29084037301646]
Weakly supervised object localization (WSOL) is one of the most popular and challenging tasks in computer vision.
Recently, dividing WSOL into two parts (class-agnostic object localization and object classification) has become the state-of-the-art pipeline for this task.
We propose to replace SCR with a binary-class detector (BCD) for localizing multiple objects, where the detector is trained by discriminating the foreground and background.
arXiv Detail & Related papers (2023-08-11T14:38:51Z) - Map-based Experience Replay: A Memory-Efficient Solution to Catastrophic
Forgetting in Reinforcement Learning [15.771773131031054]
Deep Reinforcement Learning agents often suffer from catastrophic forgetting, forgetting previously found solutions in parts of the input space when training on new data.
We introduce a novel cognitive-inspired replay memory approach based on the Grow-When-Required (GWR) self-organizing network.
Our approach organizes stored transitions into a concise environment-model-like network of state-nodes and transition-edges, merging similar samples to reduce the memory size and increase pair-wise distance among samples.
arXiv Detail & Related papers (2023-05-03T11:39:31Z) - Density Map Distillation for Incremental Object Counting [37.982124268097]
A naïve approach to incremental object counting would suffer from catastrophic forgetting, manifesting as a dramatic performance drop on previous tasks.
We propose a new exemplar-free functional regularization method, called Density Map Distillation (DMD).
During training, we introduce a new counter head for each task and introduce a distillation loss to prevent forgetting of previous tasks.
arXiv Detail & Related papers (2023-04-11T14:46:21Z) - Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning [60.501201259732625]
We introduce task-adaptive saliency for exemplar-free class-incremental learning (EFCIL) and propose a new framework, which we call Task-Adaptive Saliency Supervision (TASS).
Our experiments demonstrate that our method can better preserve saliency maps across tasks and achieve state-of-the-art results on the CIFAR-100, Tiny-ImageNet, and ImageNet-Subset EFCIL benchmarks.
arXiv Detail & Related papers (2022-12-16T02:43:52Z) - Learning to Detect Every Thing in an Open World [139.78830329914135]
We propose a simple yet surprisingly powerful data augmentation and training scheme we call Learning to Detect Every Thing (LDET).
To avoid suppressing hidden objects (background objects that are visible but unlabeled), we paste annotated objects onto a background image sampled from a small region of the original image.
LDET leads to significant improvements on many datasets in the open world instance segmentation task.
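The augmentation idea summarized above can be sketched as: synthesize a clean background by tiling a small patch of the original image (so hidden, unlabeled objects vanish), then paste the annotated objects back at their original locations. The function name, patch choice, and tiling scheme below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def ldet_style_augment(img, boxes, patch_size=2):
    """Build a background by tiling a small corner patch of the original
    image (erasing unlabeled objects), then paste the annotated boxes
    back at their original positions."""
    H, W = img.shape[:2]
    patch = img[:patch_size, :patch_size]
    reps = (-(-H // patch_size), -(-W // patch_size), 1)  # ceil division
    bg = np.tile(patch, reps)[:H, :W]          # object-free background
    for (x1, y1, x2, y2) in boxes:
        bg[y1:y2, x1:x2] = img[y1:y2, x1:x2]   # restore annotated objects
    return bg

# usage: keep only the annotated bottom-right box of a 4x4 image
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
out = ldet_style_augment(img, [(2, 2, 4, 4)])
```

The training signal then never penalizes detections on the erased regions, which is the point of the augmentation.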
arXiv Detail & Related papers (2021-12-03T03:56:06Z) - Always Be Dreaming: A New Approach for Data-Free Class-Incremental
Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
arXiv Detail & Related papers (2021-06-17T17:56:08Z) - The Effectiveness of Memory Replay in Large Scale Continual Learning [42.67483945072039]
We study continual learning in the large scale setting where tasks in the input sequence are not limited to classification, and the outputs can be of high dimension.
Existing methods usually replay only the input-output pairs.
We propose to replay the activation of the intermediate layers in addition to the input-output pairs.
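The proposal above, replaying intermediate-layer activations alongside input-output pairs, might take the following shape in a replay loss; the toy two-layer linear network and the L2 matching terms are illustrative assumptions.

```python
import numpy as np

def replay_loss(x, y_old, h_old, W1, W2, alpha=0.5):
    """Replay loss = output matching + intermediate-activation matching.
    (x, h_old, y_old) are a stored replay triple: input, old model's
    intermediate activation, and old model's output."""
    h = x @ W1                              # current model's intermediate activation
    y = h @ W2                              # current model's output
    out_term = np.mean((y - y_old) ** 2)    # standard input-output replay
    act_term = np.mean((h - h_old) ** 2)    # extra activation-replay term
    return out_term + alpha * act_term

# usage: a current model identical to the old one incurs zero replay loss
x = np.ones((2, 3))
W1, W2 = np.eye(3), np.eye(3)
h_old = x @ W1
y_old = h_old @ W2
loss = replay_loss(x, y_old, h_old, W1, W2)
```

Storing `h_old` with each replay pair is the memory cost paid for the extra constraint on the intermediate representation.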
arXiv Detail & Related papers (2020-10-06T01:23:12Z) - Generative Feature Replay For Class-Incremental Learning [46.88667212214957]
We consider a class-incremental setting which means that the task-ID is unknown at inference time.
The imbalance between old and new classes typically results in a bias of the network towards the newest ones.
We propose a solution based on generative feature replay which does not require any exemplars.
arXiv Detail & Related papers (2020-04-20T10:58:20Z) - Solving Missing-Annotation Object Detection with Background
Recalibration Loss [49.42997894751021]
This paper focuses on a novel and challenging detection scenario: a majority of the true objects/instances in the dataset are unlabeled.
Previous art proposed soft sampling to re-weight the gradients of RoIs based on their overlaps with positive instances, though that method mainly targets two-stage detectors.
In this paper, we introduce a superior solution called Background Recalibration Loss (BRL) that can automatically re-calibrate the loss signals according to the pre-defined IoU threshold and input image.
arXiv Detail & Related papers (2020-02-12T23:11:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.