Unsupervised Few-Shot Continual Learning for Remote Sensing Image Scene Classification
- URL: http://arxiv.org/abs/2406.18574v1
- Date: Tue, 4 Jun 2024 03:06:41 GMT
- Title: Unsupervised Few-Shot Continual Learning for Remote Sensing Image Scene Classification
- Authors: Muhammad Anwar Ma'sum, Mahardhika Pratama, Ramasamy Savitha, Lin Liu, Habibullah, Ryszard Kowalczyk
- Abstract summary: An unsupervised flat-wide learning approach (UNISA) is proposed for unsupervised few-shot continual learning of remote sensing image scene classification, without relying on labelled samples for model updates.
Our numerical study with remote sensing image scene datasets and a hyperspectral dataset confirms the advantages of our solution.
- Score: 14.758282519523744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A continual learning (CL) model is desired for remote sensing image analysis because of varying camera parameters, spectral ranges, resolutions, etc. There are some recent initiatives to develop CL techniques in this domain, but they still depend on massive labelled samples, which does not fully fit remote sensing applications because ground truths are often obtained via field-based surveys. This paper addresses this problem by proposing an unsupervised flat-wide learning approach (UNISA) for unsupervised few-shot continual learning of remote sensing image scene classification that does not depend on any labelled samples for its model updates. UNISA is developed from the idea of prototype scattering and positive sampling for learning representations, while the catastrophic forgetting problem is tackled with the flat-wide learning approach combined with a ball generator to address the data scarcity problem. Our numerical study with remote sensing image scene datasets and a hyperspectral dataset confirms the advantages of our solution. Source codes of UNISA are shared publicly at \url{https://github.com/anwarmaxsum/UNISA} to allow convenient future studies and reproduction of our numerical results.
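Implementation details are not given in the abstract, but two of the ingredients it names, prototype scattering and a ball generator for data scarcity, can be illustrated with a minimal, hypothetical sketch. The code below is not the authors' implementation (see the repository linked above); the function names `scatter_loss` and `ball_generator` and all hyperparameters are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of two ideas named in the abstract:
# (1) prototype scattering: push prototypes apart so that unlabeled features
#     form well-separated clusters, and
# (2) a ball generator: synthesize extra samples inside a small ball around
#     each prototype to compensate for data scarcity.
# This is NOT the authors' code; see the official UNISA repository.
import torch
import torch.nn.functional as F


def scatter_loss(prototypes: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Encourage pairwise dissimilarity between L2-normalized prototypes."""
    z = F.normalize(prototypes, dim=1)                # (K, d)
    sim = z @ z.t() / temperature                     # cosine similarities
    mask = ~torch.eye(len(z), dtype=torch.bool)       # ignore self-similarity
    return torch.logsumexp(sim[mask].view(len(z), -1), dim=1).mean()


def ball_generator(prototypes: torch.Tensor, radius: float = 0.1,
                   n_per_proto: int = 16) -> torch.Tensor:
    """Sample synthetic features uniformly inside a ball around each prototype."""
    K, d = prototypes.shape
    noise = torch.randn(K, n_per_proto, d)
    noise = noise / noise.norm(dim=-1, keepdim=True)              # random directions
    radii = radius * torch.rand(K, n_per_proto, 1) ** (1.0 / d)   # uniform in ball
    return (prototypes.unsqueeze(1) + radii * noise).reshape(-1, d)


if __name__ == "__main__":
    protos = torch.randn(10, 64, requires_grad=True)  # 10 prototypes, 64-dim
    loss = scatter_loss(protos)
    synthetic = ball_generator(protos.detach())
    print(loss.item(), synthetic.shape)               # scalar, (160, 64)
```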
Related papers
- Adaptive Domain Learning for Cross-domain Image Denoising [57.4030317607274]
We present a novel adaptive domain learning scheme for cross-domain image denoising.
We use existing data from different sensors (source domain) plus a small amount of data from the new sensor (target domain).
The ADL training scheme automatically removes the data in the source domain that are harmful to fine-tuning a model for the target domain.
Also, we introduce a modulation module that incorporates sensor-specific information (sensor type and ISO) so the model can better understand the input data for image denoising.
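The entry above only states that harmful source-domain data are removed automatically; one plausible (assumed) reading is to keep a source batch only if a trial update on it does not hurt a small target-domain validation set. The sketch below illustrates that reading and is not the published ADL scheme; `is_source_batch_helpful` and the loss-comparison criterion are hypothetical.

```python
# Hypothetical illustration of "removing source-domain data that are harmful
# to fine-tuning for the target domain": keep a source batch only if a trial
# update on it does not increase loss on a small target-domain validation set.
# This is a plausible reading of the entry above, not the published ADL scheme.
import copy
import torch


def is_source_batch_helpful(model, loss_fn, src_batch, tgt_val_batch, lr=1e-3):
    x_s, y_s = src_batch
    x_t, y_t = tgt_val_batch

    with torch.no_grad():
        base_val = loss_fn(model(x_t), y_t).item()     # target loss before update

    trial = copy.deepcopy(model)                       # trial update on a copy
    opt = torch.optim.SGD(trial.parameters(), lr=lr)
    opt.zero_grad()
    loss_fn(trial(x_s), y_s).backward()
    opt.step()

    with torch.no_grad():
        new_val = loss_fn(trial(x_t), y_t).item()      # target loss after update
    return new_val <= base_val                         # harmful if loss went up


if __name__ == "__main__":
    model = torch.nn.Linear(16, 16)                    # toy "denoiser"
    loss_fn = torch.nn.MSELoss()
    src = (torch.randn(8, 16), torch.randn(8, 16))     # (noisy, clean) source pairs
    tgt = (torch.randn(8, 16), torch.randn(8, 16))     # small target validation set
    print(is_source_batch_helpful(model, loss_fn, src, tgt))
```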
arXiv Detail & Related papers (2024-11-03T08:08:26Z) - Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z) - InfRS: Incremental Few-Shot Object Detection in Remote Sensing Images [11.916941756499435]
In this paper, we explore the intricate task of incremental few-shot object detection in remote sensing images.
We introduce a pioneering fine-tuning-based technique, termed InfRS, designed to facilitate the incremental learning of novel classes.
We develop a prototypical calibration strategy based on the Wasserstein distance to mitigate the catastrophic forgetting problem.
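The entry does not say how the Wasserstein distance enters the prototypical calibration; assuming each class prototype is summarized by a Gaussian (mean and covariance), the closed-form 2-Wasserstein distance below is a natural building block. This is only the distance computation, not the InfRS calibration strategy itself.

```python
# Closed-form 2-Wasserstein distance between two Gaussian class prototypes
# N(mu1, S1) and N(mu2, S2):
#   W2^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S2^(1/2) S1 S2^(1/2))^(1/2))
# How InfRS uses this distance for calibration is not described in the entry
# above; this sketch shows only the underlying distance computation.
import numpy as np
from scipy.linalg import sqrtm


def gaussian_w2(mu1, S1, mu2, S2):
    mean_term = np.sum((mu1 - mu2) ** 2)
    S2_half = sqrtm(S2)
    cross = sqrtm(S2_half @ S1 @ S2_half)
    cov_term = np.trace(S1 + S2 - 2.0 * np.real(cross))
    return float(np.sqrt(max(mean_term + cov_term, 0.0)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats_a = rng.normal(size=(50, 8))             # features of a base class
    feats_b = rng.normal(loc=0.5, size=(50, 8))    # features of a novel class
    d = gaussian_w2(feats_a.mean(0), np.cov(feats_a.T),
                    feats_b.mean(0), np.cov(feats_b.T))
    print(f"W2 distance between class prototypes: {d:.3f}")
```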
arXiv Detail & Related papers (2024-05-18T13:39:50Z) - Extending global-local view alignment for self-supervised learning with remote sensing imagery [1.5192294544599656]
Self-supervised models acquire general feature representations by formulating a pretext task that generates pseudo-labels for massive unlabeled data.
Inspired by DINO, we formulate two pretext tasks for self-supervised learning on remote sensing imagery (SSLRS).
We extend DINO and propose DINO-MC which uses local views of various sized crops instead of a single fixed size.
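As a rough illustration of local views with multiple crop sizes (the change DINO-MC makes relative to a single fixed size), the sketch below builds a multi-crop transform with standard torchvision components; the crop sizes and scale ranges are placeholders, not the values used in the paper.

```python
# Illustrative multi-crop augmentation in the spirit of DINO-MC: two global
# views plus local views taken at several different crop sizes, instead of a
# single fixed local-crop size. Crop sizes and scale ranges are placeholders.
from PIL import Image
import torchvision.transforms as T

GLOBAL = T.Compose([T.RandomResizedCrop(224, scale=(0.4, 1.0)),
                    T.RandomHorizontalFlip(), T.ToTensor()])
# Local views at multiple sizes (the "MC" part): 96, 64 and 32 pixels.
LOCALS = [T.Compose([T.RandomResizedCrop(s, scale=(0.05, 0.4)),
                     T.RandomHorizontalFlip(), T.ToTensor()])
          for s in (96, 64, 32)]


def multi_crop(img: Image.Image, n_local_per_size: int = 2):
    views = [GLOBAL(img), GLOBAL(img)]                    # two global views
    for tf in LOCALS:                                     # varied-size local views
        views += [tf(img) for _ in range(n_local_per_size)]
    return views


if __name__ == "__main__":
    img = Image.new("RGB", (256, 256), color=(120, 150, 90))
    print([tuple(v.shape) for v in multi_crop(img)])
```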
arXiv Detail & Related papers (2023-03-12T14:24:10Z) - Towards Effective Image Manipulation Detection with Proposal Contrastive Learning [61.5469708038966]
We propose Proposal Contrastive Learning (PCL) for effective image manipulation detection.
Our PCL consists of a two-stream architecture by extracting two types of global features from RGB and noise views respectively.
Our PCL can be easily adapted to unlabeled data in practice, which can reduce manual labeling costs and promote more generalizable features.
arXiv Detail & Related papers (2022-10-16T13:30:13Z) - Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z) - Sketched Multi-view Subspace Learning for Hyperspectral Anomalous Change Detection [12.719327447589345]
A sketched multi-view subspace learning model is proposed for anomalous change detection.
The proposed model preserves the major information of the image pairs and reduces computational complexity.
Experiments are conducted on a benchmark hyperspectral remote sensing dataset and a natural hyperspectral dataset.
arXiv Detail & Related papers (2022-10-09T14:08:17Z) - Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327]
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner.
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
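The summary names the augmentations without defining them; the sketch below shows one plausible form of background mixup, a convex blend of a training image with a background-only image, purely as an illustration. The blending rule and Beta-sampled coefficient are assumptions, not taken from the paper.

```python
# Hypothetical sketch of "background mixup": blend a training image with a
# background-only image so that background cues become less predictive during
# contrastive learning. The blending rule below (a convex combination with a
# coefficient kept near 1) is an illustrative assumption, not the paper's.
import torch


def background_mixup(image: torch.Tensor, background: torch.Tensor,
                     alpha: float = 2.0) -> torch.Tensor:
    """image, background: (C, H, W) tensors in [0, 1]."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)          # keep the object image dominant
    return lam * image + (1.0 - lam) * background


if __name__ == "__main__":
    img = torch.rand(3, 64, 64)
    bg = torch.rand(3, 64, 64)
    mixed = background_mixup(img, bg)
    print(mixed.shape, float(mixed.min()), float(mixed.max()))
```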
arXiv Detail & Related papers (2021-07-30T19:24:07Z) - CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
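CutPaste itself is a simple augmentation: cut a rectangular patch from an image and paste it back at a random location, then train a classifier to separate original from augmented images. A minimal sketch is below; the patch-size ranges are placeholders.

```python
# Minimal sketch of the CutPaste augmentation: cut a random rectangular patch
# from the image and paste it at a different random location. A classifier
# trained to tell original vs. CutPaste images yields representations used for
# anomaly detection. Patch-size ranges below are placeholders.
import random
import numpy as np


def cutpaste(image: np.ndarray, rng: random.Random = random.Random(0)) -> np.ndarray:
    """image: (H, W, C) uint8 array; returns an augmented copy."""
    h, w = image.shape[:2]
    ph = rng.randint(h // 8, h // 4)          # patch height
    pw = rng.randint(w // 8, w // 4)          # patch width
    y1, x1 = rng.randint(0, h - ph), rng.randint(0, w - pw)   # source corner
    y2, x2 = rng.randint(0, h - ph), rng.randint(0, w - pw)   # paste corner
    out = image.copy()
    out[y2:y2 + ph, x2:x2 + pw] = image[y1:y1 + ph, x1:x1 + pw]
    return out


if __name__ == "__main__":
    img = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
    aug = cutpaste(img)
    print((aug != img).any())   # True: a patch has (almost surely) been moved
```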
arXiv Detail & Related papers (2021-04-08T19:04:55Z) - Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data [64.40187171234838]
Seasonal Contrast (SeCo) is an effective pipeline to leverage unlabeled data for in-domain pre-training of remote sensing representations.
SeCo will be made public to facilitate transfer learning and enable rapid progress in remote sensing applications.
arXiv Detail & Related papers (2021-03-30T18:26:39Z) - Remote Sensing Image Scene Classification with Self-Supervised Paradigm under Limited Labeled Samples [11.025191332244919]
We introduce a new self-supervised learning (SSL) mechanism to obtain a high-performance pre-trained model for RSI scene classification from large unlabeled data.
Experiments on three commonly used RSI scene classification datasets demonstrate that this new learning paradigm outperforms the traditional dominant ImageNet pre-trained model.
The insights distilled from our studies can help to foster the development of SSL in the remote sensing community.
arXiv Detail & Related papers (2020-10-02T09:27:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.