FDCNet: Feature Drift Compensation Network for Class-Incremental Weakly
Supervised Object Localization
- URL: http://arxiv.org/abs/2309.09122v1
- Date: Sun, 17 Sep 2023 01:10:45 GMT
- Title: FDCNet: Feature Drift Compensation Network for Class-Incremental Weakly
Supervised Object Localization
- Authors: Sejin Park and Taehyung Lee and Yeejin Lee and Byeongkeun Kang
- Abstract summary: This work addresses the task of class-incremental weakly supervised object localization (CI-WSOL).
The goal is to incrementally learn object localization for novel classes using only image-level annotations while retaining the ability to localize previously learned classes.
We first present a strong baseline method for CI-WSOL by adapting the strategies of class-incremental classifiers to mitigate catastrophic forgetting.
We then propose the feature drift compensation network to compensate for the effects of feature drifts on class scores and localization maps.
- Score: 10.08410402383604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses the task of class-incremental weakly supervised object
localization (CI-WSOL). The goal is to incrementally learn object localization
for novel classes using only image-level annotations while retaining the
ability to localize previously learned classes. This task is important because
object localization is crucial in various applications, yet annotating bounding
boxes for every batch of newly incoming data is expensive. To the best of our
knowledge, we are the first to address this task. Thus, we first present a
strong baseline method for CI-WSOL by adapting the strategies of
class-incremental classifiers to mitigate catastrophic forgetting. These
strategies include applying knowledge distillation, maintaining a small data
set from previous tasks, and using cosine normalization. We then propose the
feature drift compensation network to compensate for the effects of feature
drifts on class scores and localization maps. Since updating network parameters
to learn new tasks causes feature drifts, compensating for the final outputs is
necessary. Finally, we evaluate our proposed method by conducting experiments
on two publicly available datasets (ImageNet-100 and CUB-200). The experimental
results demonstrate that the proposed method outperforms other baseline
methods.
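The baseline recipe above combines three standard class-incremental ingredients (knowledge distillation from the frozen previous-task model, a small exemplar memory replayed alongside new data, and cosine normalization of the classifier) with a compensation step applied to the final outputs. Below is a minimal PyTorch sketch of how these pieces could fit together; it is not the paper's actual FDCNet. The feature dimension, class counts, the toy DriftCompensator head, the cosine scale, and the distillation temperature are all illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, OLD_CLASSES, NEW_CLASSES = 512, 10, 5  # illustrative sizes


class CosineClassifier(nn.Module):
    """Logits are scaled cosine similarities between the L2-normalized
    feature and L2-normalized class weight vectors."""

    def __init__(self, feat_dim, num_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats):
        return self.scale * F.linear(F.normalize(feats, dim=1),
                                     F.normalize(self.weight, dim=1))


class DriftCompensator(nn.Module):
    """Toy compensation head (an assumption, not the paper's network):
    predicts the feature drift introduced by the update and subtracts it
    before scoring, illustrating the idea of correcting final outputs."""

    def __init__(self, feat_dim):
        super().__init__()
        self.delta = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                   nn.Linear(feat_dim, feat_dim))

    def forward(self, feats):
        return feats - self.delta(feats)


def incremental_loss(logits_new, logits_old, labels, T=2.0, kd_weight=1.0):
    """Cross-entropy on the current batch (new data plus stored exemplars)
    plus distillation that keeps old-class logits close to the frozen
    previous-task model's outputs."""
    ce = F.cross_entropy(logits_new, labels)
    n_old = logits_old.size(1)
    kd = F.kl_div(F.log_softmax(logits_new[:, :n_old] / T, dim=1),
                  F.softmax(logits_old / T, dim=1),
                  reduction="batchmean") * T ** 2
    return ce + kd_weight * kd


# One training step on a mixed batch of new-task images and exemplars.
backbone_old = nn.Linear(2048, FEAT_DIM)  # stand-in for the frozen old CNN
backbone_new = nn.Linear(2048, FEAT_DIM)  # stand-in for the updated CNN
head_old = CosineClassifier(FEAT_DIM, OLD_CLASSES)
head_new = CosineClassifier(FEAT_DIM, OLD_CLASSES + NEW_CLASSES)
compensate = DriftCompensator(FEAT_DIM)

x = torch.randn(8, 2048)
labels = torch.randint(0, OLD_CLASSES + NEW_CLASSES, (8,))
with torch.no_grad():                      # previous model stays frozen
    logits_old = head_old(backbone_old(x))
logits_new = head_new(compensate(backbone_new(x)))
incremental_loss(logits_new, logits_old, labels).backward()
```
Note that in the paper the compensation targets both class scores and localization maps; this sketch covers only the classification branch.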
Related papers
- Improving Weakly-Supervised Object Localization Using Adversarial Erasing and Pseudo Label [7.400926717561454]
This paper investigates a framework for weakly-supervised object localization.
It aims to train a neural network capable of predicting both the object class and its location using only images and their image-level class labels.
arXiv Detail & Related papers (2024-04-15T06:02:09Z)
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and classifier learning in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
- Improved Region Proposal Network for Enhanced Few-Shot Object Detection [23.871860648919593]
Few-shot object detection (FSOD) methods have emerged as a solution to the limitations of classic object detection approaches.
We develop a semi-supervised algorithm to detect and then utilize unlabeled novel objects as positive samples during the FSOD training stage.
Our improved hierarchical sampling strategy for the region proposal network (RPN) also boosts the perception of the object detection model for large objects.
arXiv Detail & Related papers (2023-08-15T02:35:59Z)
- Density Map Distillation for Incremental Object Counting [37.982124268097]
A na"ive approach to incremental object counting would suffer from catastrophic forgetting, where it would suffer from a dramatic performance drop on previous tasks.
We propose a new exemplar-free functional regularization method, called Density Map Distillation (DMD)
During training, we introduce a new counter head for each task and introduce a distillation loss to prevent forgetting of previous tasks.
arXiv Detail & Related papers (2023-04-11T14:46:21Z)
- Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
- On the Exploration of Incremental Learning for Fine-grained Image Retrieval [45.48333682748607]
We consider the problem of fine-grained image retrieval in an incremental setting, when new categories are added over time.
We propose an incremental learning method to mitigate retrieval performance degradation caused by the forgetting issue.
Our method effectively mitigates the catastrophic forgetting on the original classes while achieving high performance on the new classes.
arXiv Detail & Related papers (2020-10-15T21:07:44Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Semantic Drift Compensation for Class-Incremental Learning [48.749630494026086]
Class-incremental learning of deep networks sequentially increases the number of classes to be classified.
We propose a new method to estimate the drift, called semantic drift, of features and compensate for it without the need of any exemplars (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-04-01T13:31:19Z)
- Pairwise Similarity Knowledge Transfer for Weakly Supervised Object Localization [53.99850033746663]
We study the problem of learning localization model on target classes with weakly supervised image labels.
In this work, we argue that learning only an objectness function is a weak form of knowledge transfer.
Experiments on the COCO and ILSVRC 2013 detection datasets show that the performance of the localization model improves significantly with the inclusion of pairwise similarity function.
arXiv Detail & Related papers (2020-03-18T17:53:33Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
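Among the related papers, Semantic Drift Compensation is the closest in spirit to FDCNet's drift correction, so a minimal sketch of its core idea may help: estimate how embeddings of current-task data move during the update, and shift the stored old-class prototypes by a distance-weighted average of those drift vectors. The kernel bandwidth SIGMA, the tensor shapes, and the synthetic data below are illustrative assumptions, not values from either paper.
```python
import torch

SIGMA = 0.3  # bandwidth of the Gaussian weighting kernel (assumed)


def compensate_prototypes(prototypes, feats_before, feats_after):
    """Shift each old-class prototype by a distance-weighted average of the
    drift vectors observed on current-task data.

    prototypes:   (C, D) class means computed with the previous model
    feats_before: (N, D) current-task embeddings under the previous model
    feats_after:  (N, D) the same samples embedded by the updated model
    """
    drift = feats_after - feats_before                    # (N, D)
    # Gaussian weight of sample n for prototype c, based on the distance
    # between the sample's old embedding and the prototype.
    d2 = torch.cdist(prototypes, feats_before).pow(2)     # (C, N)
    w = torch.exp(-d2 / (2 * SIGMA ** 2))                 # (C, N)
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)    # normalize rows
    return prototypes + w @ drift                         # (C, D)


# Usage: classify with the nearest (compensated) class mean.
protos = torch.randn(10, 64)                 # 10 old-class prototypes, dim 64
before = torch.randn(100, 64)                # current-task feats, old model
after = before + 0.1 * torch.randn(100, 64)  # drifted feats, new model
protos_c = compensate_prototypes(protos, before, after)
query = torch.randn(5, 64)
pred = torch.cdist(query, protos_c).argmin(dim=1)  # nearest-class-mean labels
print(pred)
```
Because the drift is estimated entirely from current-task data, the compensation needs no stored exemplars of old classes, matching the exemplar-free claim in that paper's summary.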
This list is automatically generated from the titles and abstracts of the papers on this site.