Efficient Curriculum based Continual Learning with Informative Subset
Selection for Remote Sensing Scene Classification
- URL: http://arxiv.org/abs/2309.01050v1
- Date: Sun, 3 Sep 2023 01:25:40 GMT
- Title: Efficient Curriculum based Continual Learning with Informative Subset
Selection for Remote Sensing Scene Classification
- Authors: S Divakar Bhat, Biplab Banerjee, Subhasis Chaudhuri, Avik Bhattacharya
- Abstract summary: We tackle the problem of class incremental learning (CIL) in the realm of landcover classification from optical remote sensing (RS) images.
We propose a novel CIL framework inspired by the recent success of replay-memory based approaches.
- Score: 27.456319725214474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We tackle the problem of class incremental learning (CIL) in the realm of
landcover classification from optical remote sensing (RS) images in this paper.
The paradigm of CIL has recently gained much prominence given that data
are generally obtained in a sequential manner for real-world phenomena.
However, CIL has not yet been extensively considered in the RS domain,
despite the fact that satellites tend to discover new classes at
different geographical locations over time. With this motivation, we propose a
novel CIL framework inspired by the recent success of replay-memory based
approaches and tackling two of their shortcomings. In order to reduce the
effect of catastrophic forgetting of the old classes when a new stream arrives,
we learn a curriculum of the new classes based on their similarity with the old
classes. This is found to limit the degree of forgetting substantially. Next,
while constructing the replay memory, instead of randomly selecting samples
from the old streams, we propose a sample selection strategy which ensures the
selection of highly confident samples so as to reduce the effects of noise. We
observe a sharp improvement in the CIL performance with the proposed
components. Experimental results on the benchmark NWPU-RESISC45, PatternNet,
and EuroSAT datasets confirm that our method offers a better
stability-plasticity trade-off than existing methods.
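The two components described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the use of cosine similarity to class-mean prototypes, the easiest-first ordering, and the softmax-confidence criterion are all assumptions made for the sketch.

```python
import numpy as np

def curriculum_order(new_class_feats, old_prototypes):
    """Order new classes by similarity to old-class prototypes.

    new_class_feats: dict mapping class id -> (n_i, d) feature array
    old_prototypes:  (k, d) array of old-class mean features
    Most-similar-first is one plausible curriculum; the paper's exact
    similarity measure and ordering may differ.
    """
    def sim(feats):
        mu = feats.mean(axis=0)
        mu = mu / np.linalg.norm(mu)
        protos = old_prototypes / np.linalg.norm(old_prototypes, axis=1, keepdims=True)
        return float((protos @ mu).max())  # similarity to the closest old class
    return sorted(new_class_feats, key=lambda c: sim(new_class_feats[c]), reverse=True)

def select_confident(probs, labels, per_class):
    """Keep the `per_class` most confident, correctly classified samples
    of each old class for the replay memory (reduces label noise)."""
    keep = []
    conf = probs.max(axis=1)                    # softmax confidence per sample
    correct = probs.argmax(axis=1) == labels    # drop misclassified candidates
    for c in np.unique(labels):
        idx = np.where((labels == c) & correct)[0]
        keep.extend(idx[np.argsort(-conf[idx])][:per_class])
    return np.array(keep)
```

In this sketch, learning similar classes first limits interference with old knowledge, while the confidence filter keeps noisy or ambiguous samples out of the replay buffer.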
Related papers
- Happy: A Debiased Learning Framework for Continual Generalized Category Discovery [54.54153155039062]
This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD)
C-GCD aims to incrementally discover new classes from unlabeled data while maintaining the ability to recognize previously learned classes.
We introduce a debiased learning framework, namely Happy, characterized by Hardness-aware prototype sampling and soft entropy regularization.
arXiv Detail & Related papers (2024-10-09T04:18:51Z)
- Exemplar-Free Class Incremental Learning via Incremental Representation [26.759108983223115]
We propose a simple Incremental Representation (IR) framework for exemplar-free CIL (efCIL) without constructing old pseudo-features.
IR utilizes dataset augmentation to cover a suitable feature space and prevents the model from forgetting by using a single L2 space maintenance loss.
arXiv Detail & Related papers (2024-03-24T16:29:50Z)
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF)
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
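Distribution-based pseudo-feature replay as described above can be sketched in a few lines. The per-class diagonal-Gaussian parameterization and the function name are assumptions for this sketch; LP-DiF's exact distribution estimates may differ.

```python
import numpy as np

def sample_pseudo_features(class_stats, n_per_class, rng=None):
    """Draw pseudo-features from stored per-class Gaussian statistics.

    class_stats: dict mapping class id -> (mean (d,), diagonal variance (d,))
    Returns (features, labels) to mix with the current session's real
    training data, so old classes stay represented without storing exemplars.
    """
    rng = rng or np.random.default_rng()
    feats, labels = [], []
    for c, (mu, var) in class_stats.items():
        feats.append(rng.normal(mu, np.sqrt(var), size=(n_per_class, mu.shape[0])))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)
```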
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
- Fast Hierarchical Learning for Few-Shot Object Detection [57.024072600597464]
Transfer learning approaches have recently achieved promising results on the few-shot detection task.
These approaches suffer from the catastrophic forgetting issue due to fine-tuning of the base detector.
We tackle the aforementioned issues in this work.
arXiv Detail & Related papers (2022-10-10T20:31:19Z)
- Improving Replay-Based Continual Semantic Segmentation with Smart Data Selection [0.0]
We investigate the influences of various replay strategies for semantic segmentation and evaluate them in class- and domain-incremental settings.
Our findings suggest that in a class-incremental setting, it is critical to achieve a uniform distribution for the different classes in the buffer.
In the domain-incremental setting, it is most effective to select buffer samples by uniformly sampling from the distribution of learned feature representations or by choosing samples with median entropy.
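The median-entropy buffer strategy mentioned above can be sketched briefly. The function name and the tie-breaking behavior (nearest-to-median by absolute distance) are assumptions for this illustration.

```python
import numpy as np

def median_entropy_selection(probs, k):
    """Select the k buffer samples whose predictive entropy is closest to
    the median entropy, avoiding both trivially easy and highly ambiguous
    samples. probs: (n, c) array of per-sample class probabilities."""
    eps = 1e-12                                   # avoid log(0)
    ent = -(probs * np.log(probs + eps)).sum(axis=1)
    med = np.median(ent)
    return np.argsort(np.abs(ent - med))[:k]
```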
arXiv Detail & Related papers (2022-09-20T16:32:06Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
- Lifelong Object Detection [28.608982224098565]
We leverage the fact that new training classes arrive in a sequential manner and incrementally refine the model.
We consider the representative object detector, Faster R-CNN, for both accurate and efficient prediction.
arXiv Detail & Related papers (2020-09-02T15:08:51Z)
- Novelty Detection via Non-Adversarial Generative Network [47.375591404354765]
A novel decoder-encoder framework is proposed for novelty detection task.
Under the non-adversarial framework, both latent space and image reconstruction space are jointly optimized.
Our model clearly outperforms cutting-edge novelty detectors and achieves state-of-the-art results on the datasets.
arXiv Detail & Related papers (2020-02-03T01:05:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.